Systems Analysis
AS Level — Unit 1: Fundamentals of Computer Science
The Systems Development Life Cycle (SDLC)
The Systems Development Life Cycle (SDLC) is a structured framework that defines the stages involved in developing an information system, from initial investigation through to maintenance. It provides a systematic approach to planning, creating, testing, and deploying software.
Stages of the SDLC
The SDLC is typically described as having six main stages. Different sources may group or name them slightly differently, but the core activities remain the same.
1. Feasibility Study and Analysis
This is the investigation phase. The purpose is to understand the current system, identify its problems, and determine the requirements for the new system.
Activities include:
- Investigating the current system – understanding how it works, who uses it, what data it processes, and what its shortcomings are.
- Gathering requirements – determining what the new system must do (functional requirements) and how well it must perform (non-functional requirements such as speed, security, usability).
- Conducting a feasibility study – assessing whether the project is viable (see the detailed section below).
- Producing a requirements specification – a formal document that precisely describes what the system must do.
- Fact-finding – using techniques such as interviews, questionnaires, observation, and document analysis to gather information.
2. Design
The design phase translates the requirements into a detailed plan for the new system. Designers create specifications for:
- Data structures and database design – what data will be stored and how it will be organised (e.g., entity-relationship diagrams, table structures).
- User interface (UI) design – screen layouts, menus, navigation, forms, and reports.
- System architecture – hardware requirements, network configuration, and how components interact.
- Algorithm design – the logic for processing data (using pseudocode or flowcharts).
- Security design – authentication, access levels, encryption, and backup procedures.
- Testing plan – what tests will be carried out and what results are expected.
3. Implementation (Development)
The system is built according to the design specifications:
- Programmers write the code using an appropriate programming language.
- Databases are created and populated.
- The user interface is built.
- Individual modules are developed and then integrated.
- Code is documented with comments and technical documentation.
4. Testing
The system is tested to ensure it works correctly and meets the requirements:
- Unit testing – individual modules or functions are tested in isolation.
- Integration testing – modules are combined and tested together to check they work as a system.
- System testing – the entire system is tested end-to-end against the requirements specification.
- User acceptance testing (UAT) – end users test the system to confirm it meets their needs and is fit for purpose.
- Test data includes normal data (typical valid inputs), boundary data (values at the edge of valid ranges), and erroneous data (invalid inputs that should be rejected).
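The three categories of test data can be illustrated with a simple validation routine. This is a minimal sketch: the `validate_mark` function and the 0–100 mark range are hypothetical, invented here for illustration.

```python
def validate_mark(mark):
    """Accept an exam mark only if it is an integer from 0 to 100 (hypothetical rule)."""
    return isinstance(mark, int) and 0 <= mark <= 100

# Normal data: typical valid inputs -> accepted
assert validate_mark(55)

# Boundary data: values at the very edges of the valid range -> accepted
assert validate_mark(0) and validate_mark(100)

# Erroneous data: invalid inputs that the system should reject
assert not validate_mark(-1)
assert not validate_mark(101)
assert not validate_mark("fifty")
```

Each assertion here is effectively a unit test for one module; integration and system testing would exercise the same categories of data across combined modules.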
5. Installation (Deployment)
The new system is deployed and replaces the old system. This involves:
- Choosing and executing an appropriate changeover method (direct, parallel, phased, or pilot).
- Migrating data from the old system to the new system.
- Training users on the new system.
- Providing user documentation (user guides, help files).
6. Maintenance
After deployment, the system must be maintained to keep it operational and up to date. See the detailed Maintenance section below.
The SDLC is a cycle, not a one-off process. After maintenance, the system may eventually need to be replaced, starting the cycle again. Be prepared to describe each stage and explain what happens during it.
Software Development Methodologies
The SDLC stages can be organised in different ways depending on the chosen development methodology. Each methodology has different strengths and suits different types of project.
Waterfall Model
The waterfall model is the most traditional approach. Each stage is completed fully before the next stage begins, and there is no going back to a previous stage.
Stages flow downward like a waterfall:
- Requirements/Analysis
- Design
- Implementation
- Testing
- Deployment
- Maintenance
| Advantages | Disadvantages |
|---|---|
| Simple and easy to understand | No going back – errors found late are expensive to fix |
| Well-documented at each stage | The client does not see the product until late in the process |
| Clear milestones and deliverables | Requirements must be fully known at the start |
| Good for small, well-defined projects | Not suitable for projects where requirements may change |
| Easy to manage due to rigid structure | Testing only happens after implementation |
Best suited for: Projects with clear, fixed requirements that are unlikely to change, such as safety-critical systems or government contracts.
Agile / Iterative Development
Agile development is an iterative approach where the system is developed in short cycles called iterations (or sprints, typically 2–4 weeks). Each iteration produces a working increment of the software that can be reviewed and tested by the client.
Key principles:
- Working software is delivered frequently and incrementally.
- Requirements can change and evolve throughout the project.
- Close collaboration between developers and clients.
- Regular feedback and adaptation.
- Small, self-organising teams.
| Advantages | Disadvantages |
|---|---|
| Flexible – adapts to changing requirements | Can be difficult to estimate total time and cost |
| Client sees working software early and often | Requires continuous client involvement |
| Bugs are found and fixed early | Less documentation may cause issues later |
| Higher client satisfaction due to involvement | Scope can expand uncontrollably (“scope creep”) |
| Reduced risk – problems are caught each iteration | Not ideal for very large, complex systems |
Best suited for: Projects where requirements are uncertain or likely to change, such as web applications, mobile apps, and startup products.
Spiral Model
The spiral model combines elements of both waterfall and iterative approaches, with a strong emphasis on risk analysis. Development proceeds in spirals (cycles), and each spiral has four phases:
- Planning – determine objectives, alternatives, and constraints.
- Risk analysis – identify and evaluate risks; build prototypes to address uncertainties.
- Development and testing – build and test the product increment.
- Evaluation – review the results with the client and plan the next spiral.
| Advantages | Disadvantages |
|---|---|
| Explicit risk management at every cycle | Complex and expensive to manage |
| Flexible to changing requirements | Requires expertise in risk assessment |
| Suitable for large, high-risk projects | Not suitable for small, low-risk projects |
| Prototyping reduces uncertainty | Can be slow due to repeated risk analysis |
Best suited for: Large, complex, high-risk projects where requirements are unclear, such as military systems or large enterprise software.
RAD (Rapid Application Development)
RAD is a methodology that prioritises rapid prototyping and quick feedback over lengthy planning and documentation. The goal is to produce a working system as quickly as possible.
Key features:
- Heavy use of prototyping – quick, rough versions of the system are built and shown to the client.
- Client feedback drives each iteration of the prototype.
- Minimal planning and documentation compared to waterfall.
- Uses tools such as GUI builders and code generators to speed development.
- The final system evolves from the prototype.
| Advantages | Disadvantages |
|---|---|
| Very fast development | Final product may be lower quality due to speed |
| Client is heavily involved and gives continuous feedback | Requires skilled and experienced developers |
| Reduced risk of building the wrong system | Not suitable for large or complex systems |
| Prototypes help clarify requirements | Poor documentation may cause maintenance difficulties |
| Flexible and adaptable | Relies heavily on client availability |
Best suited for: Small to medium projects with tight deadlines where requirements are not fully defined, such as business applications and user interface design.
V-Model (Verification and Validation Model)
The V-model is an extension of the waterfall model that emphasises testing at each stage. Each development stage on the left side of the “V” has a corresponding testing stage on the right side.
```
Requirements Analysis  <----------->  User Acceptance Testing
System Design          <----------->  System Testing
Module Design          <----------->  Integration Testing
Coding                 <----------->  Unit Testing
```
The left side represents development (verification – “are we building the product right?”), and the right side represents testing (validation – “are we building the right product?”).
| Advantages | Disadvantages |
|---|---|
| Testing is planned from the very start | Rigid – no going back once a stage is complete |
| Clear relationship between development and testing stages | Requirements must be fully known upfront |
| Defects are found early due to early test planning | Not flexible for changing requirements |
| Higher quality and reliability | Expensive for small projects |
| Well-suited for safety-critical systems | No prototypes are produced |
Best suited for: Projects where quality and reliability are critical and requirements are well understood, such as medical devices, avionics, and financial systems.
Prototyping
Prototyping involves building a preliminary version (a prototype) of the system to explore requirements, test ideas, and gather user feedback before building the final product.
Types of prototyping:
- Throwaway (disposable) prototyping – the prototype is built quickly to explore an idea, shown to the client, and then discarded. The final system is built from scratch using the lessons learned.
- Evolutionary prototyping – the prototype is continuously refined based on feedback until it becomes the final system.
| Advantages | Disadvantages |
|---|---|
| Helps clarify vague or uncertain requirements | Can raise unrealistic client expectations |
| Users can see and interact with a working model early | Time spent on prototypes may be wasted (throwaway) |
| Reduces risk of building the wrong system | May lead to poorly structured final code (evolutionary) |
| Encourages user involvement | Can be difficult to know when to stop refining |
Methodology Comparison Summary
| Methodology | Flexibility | Risk Management | Client Involvement | Best For |
|---|---|---|---|---|
| Waterfall | Low | Low | Low (only at start/end) | Fixed, well-defined requirements |
| Agile | High | Medium | High (continuous) | Evolving requirements |
| Spiral | High | High (explicit) | Medium | Large, high-risk projects |
| RAD | High | Low | High (continuous) | Fast delivery, small projects |
| V-Model | Low | Medium (via testing) | Low | Safety-critical, quality-focused |
| Prototyping | High | Medium | High | Unclear requirements |
A common exam question gives you a scenario and asks you to recommend and justify a development methodology. Consider: How well are the requirements defined? Is the project large or small? Is it safety-critical? How involved can the client be? How much risk is involved? Match the methodology to the scenario’s characteristics.
Feasibility Study
A feasibility study is an investigation carried out early in the SDLC to determine whether a proposed project is viable and worth pursuing. It examines the project from multiple perspectives before significant resources are committed.
The feasibility study typically examines five areas, sometimes remembered by the acronym TELOS:
Technical Feasibility
Can the system be built with current technology and expertise?
- Is the required hardware and software available?
- Does the organisation have (or can it acquire) the necessary technical skills?
- Is the proposed technology proven and reliable?
- Can the system integrate with existing systems?
Economic Feasibility (Cost-Benefit Analysis)
Is the system financially worthwhile?
- What are the development costs (hardware, software, staff, training)?
- What are the ongoing running costs (maintenance, hosting, support)?
- What are the expected benefits (cost savings, increased revenue, improved efficiency)?
- Do the benefits outweigh the costs over the system’s expected lifetime?
- What is the payback period (how long before the investment is recovered)?
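The payback period calculation is simple arithmetic: divide the initial development cost by the net annual benefit. A minimal sketch, using hypothetical figures rather than any from a real scenario:

```python
def payback_period(development_cost, annual_benefit, annual_running_cost):
    """Years until cumulative net benefit recovers the initial investment."""
    net_annual = annual_benefit - annual_running_cost
    if net_annual <= 0:
        return None  # costs exceed benefits: the investment is never recovered
    return development_cost / net_annual

# Hypothetical figures: £50,000 to develop, £20,000/year benefit, £5,000/year running costs
years = payback_period(50_000, 20_000, 5_000)
print(f"Payback period: {years:.1f} years")  # ≈ 3.3 years
```

If the payback period exceeds the system's expected lifetime, the project fails the economic feasibility test.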
Legal Feasibility
Will the system comply with all relevant laws and regulations?
- Does it comply with the Data Protection Act / UK GDPR (if handling personal data)?
- Does it comply with the Computer Misuse Act (security requirements)?
- Are there any copyright or licensing issues with the software or data used?
- Does it meet any industry-specific regulations (e.g., health and safety, financial regulations)?
- Are there contractual obligations that affect the project?
Operational Feasibility
Will the system work in practice within the organisation?
- Will the users accept and adopt the new system?
- Is the system compatible with existing business processes?
- Will sufficient training be provided?
- Is the organisation prepared for the changes the system will bring?
- Will the system actually solve the identified problems?
Schedule (Time) Feasibility
Can the system be developed within the required timeframe?
- Is the deadline realistic given the scope of the project?
- Are there enough skilled staff available to meet the schedule?
- Are there any external dependencies that could cause delays?
- What is the impact of missing the deadline?
Remember TELOS: Technical, Economic, Legal, Operational, Schedule. In the exam, you may be asked to discuss the feasibility of a proposed system. Make sure you address multiple aspects of feasibility and relate them specifically to the scenario given.
Fact-Finding Techniques
During the analysis stage, the systems analyst must gather information about the current system and the requirements for the new system. There are several techniques for doing this.
Interviews
Face-to-face (or virtual) conversations with stakeholders, users, and managers.
- Structured interviews have a pre-set list of questions asked in a fixed order.
- Unstructured interviews are more like conversations, allowing the interviewer to follow up on interesting points.
- Semi-structured interviews combine both approaches.
| Advantages | Disadvantages |
|---|---|
| Can ask follow-up questions and probe deeper | Time-consuming to conduct and analyse |
| Body language and tone provide additional information | Interviewee may be biased or tell you what they think you want to hear |
| Can clarify misunderstandings immediately | Difficult to interview large numbers of people |
| Builds rapport with stakeholders | Responses may be subjective |
Questionnaires
Written sets of questions distributed to a large number of people.
- Can include closed questions (yes/no, multiple choice, rating scales) for quantitative data.
- Can include open questions (free text) for qualitative data.
- Can be distributed on paper or electronically.
| Advantages | Disadvantages |
|---|---|
| Can reach a large number of people quickly | No opportunity to ask follow-up questions |
| Inexpensive to distribute and collect | Questions may be misunderstood |
| Responses can be analysed statistically | Low response rates are common |
| Respondents can remain anonymous (encouraging honesty) | Cannot observe body language or tone |
| Consistent – everyone gets the same questions | Poorly designed questions lead to unreliable data |
Observation
Watching users as they perform their tasks in the current system.
| Advantages | Disadvantages |
|---|---|
| See exactly how the current system is used in practice | Time-consuming |
| Can identify inefficiencies and workarounds that users may not mention | Users may behave differently when being observed (Hawthorne effect) |
| Provides first-hand, objective data | Observer may misinterpret what they see |
| No reliance on users accurately describing their own work | Can only observe what is happening now, not what should happen |
Document Analysis
Examining existing documents such as forms, reports, manuals, procedure guides, and data files from the current system.
| Advantages | Disadvantages |
|---|---|
| Provides concrete, factual information about the current system | Documents may be out of date or incomplete |
| Can be done without disrupting users | Does not capture informal processes or workarounds |
| Reveals the structure of data currently used | Large volumes of documents can be overwhelming |
| No scheduling needed – documents are available anytime | Cannot ask the document follow-up questions |
Exam questions often ask you to recommend a fact-finding technique for a given scenario and justify your choice. Consider the number of people involved, the type of information needed, the time available, and whether you need qualitative or quantitative data.
Requirements Specification
A requirements specification (also called a software requirements specification or SRS) is a formal document that describes exactly what the new system must do. It forms a contract between the client and the developers, and all subsequent design, development, and testing are based on it.
Types of Requirements
Functional requirements describe what the system must do:
- The system must allow users to log in with a username and password.
- The system must generate a monthly sales report.
- The system must calculate VAT at the current rate.
- The system must send email notifications when an order is dispatched.
Non-functional requirements describe how well the system must perform:
- Performance – the system must respond to user queries within 2 seconds.
- Security – all passwords must be stored using encryption.
- Usability – the system must be usable by non-technical staff with minimal training.
- Reliability – the system must have 99.9% uptime.
- Scalability – the system must handle up to 10,000 concurrent users.
- Compatibility – the system must work on Windows 10 and above.
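A functional requirement should be precise enough to translate directly into a testable behaviour. A sketch, assuming the VAT requirement above and a hypothetical rate of 20% (the actual "current rate" would come from the scenario):

```python
VAT_RATE = 0.20  # assumption: hypothetical current VAT rate

def price_with_vat(net_price):
    """Functional requirement: the system must calculate VAT at the current rate."""
    return round(net_price * (1 + VAT_RATE), 2)

# Each functional requirement becomes a verifiable test case during the testing stage
assert price_with_vat(100.00) == 120.00
assert price_with_vat(19.99) == 23.99
```

Non-functional requirements (e.g. "respond within 2 seconds") are tested differently, by measuring the running system rather than checking a single output.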
Importance of the Requirements Specification
- Provides a clear and agreed understanding between client and developer.
- Acts as a benchmark for testing – testers can verify each requirement is met.
- Helps with project planning – developers can estimate time and cost based on the requirements.
- Reduces the risk of scope creep – changes must be formally agreed.
- Forms a legal basis – in case of disputes about what was agreed.
Top-Down Design and Stepwise Refinement
Top-down design is a problem-solving approach where a complex problem is broken down into a series of smaller, more manageable sub-problems. Each sub-problem is then further broken down until each part is simple enough to be solved directly. This repeated decomposition is called stepwise refinement.
Top-down design is one of the most fundamental techniques in software engineering. Rather than trying to solve an entire problem at once, you start with a high-level overview and progressively add detail.
How Stepwise Refinement Works
- Identify the overall problem – state what the system must do at the highest level.
- Decompose the problem into major sub-tasks (typically 3–6 at each level).
- Refine each sub-task by breaking it down further into smaller steps.
- Repeat until every step is simple enough to be implemented directly in code (i.e. it cannot meaningfully be broken down any further).
Example – Library Book Loan System
- Level 0: Library Book Loan System
- Level 1: Search for Book, Issue Book, Return Book, Generate Reports
- Level 2 (refining "Issue Book"): Validate Member, Check Availability, Record Loan, Print Receipt
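The "Issue Book" decomposition maps naturally onto code: each lowest-level task becomes its own function, and the parent module simply calls them in order. A sketch only; the data structures and function signatures here are invented for illustration:

```python
def validate_member(member_id, members):
    """Lowest-level module: is this a registered member?"""
    return member_id in members

def check_availability(isbn, stock):
    """Lowest-level module: is at least one copy in stock?"""
    return stock.get(isbn, 0) > 0

def record_loan(member_id, isbn, loans):
    """Lowest-level module: store the loan."""
    loans.append((member_id, isbn))

def print_receipt(member_id, isbn):
    """Lowest-level module: produce the receipt text."""
    return f"Loaned {isbn} to member {member_id}"

def issue_book(member_id, isbn, members, stock, loans):
    """Parent module: calls each refined sub-task in order."""
    if not validate_member(member_id, members):
        return "Unknown member"
    if not check_availability(isbn, stock):
        return "Book unavailable"
    record_loan(member_id, isbn, loans)
    stock[isbn] -= 1
    return print_receipt(member_id, isbn)

members = {"M001", "M002"}
stock = {"978-0": 1}
loans = []
print(issue_book("M001", "978-0", members, stock, loans))
```

Because each sub-task is a separate function, every module can be unit tested independently and assigned to a different programmer, exactly as the advantages below describe.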
Advantages of Top-Down Design
- Makes large problems manageable by dividing them into small, well-defined tasks.
- Enables team working – different programmers can work on different modules.
- Each module can be tested independently (unit testing).
- Produces a clear structure that is easy to understand and maintain.
- Naturally leads to modular, well-organised code.
In an exam, if asked to produce a top-down design, start with a single box at the top representing the whole system, then break it into 3–5 sub-tasks, then break at least one of those sub-tasks down further. Always use verbs for task names (e.g. “Validate Input”, “Calculate Total”) to show they are actions the system performs.
Structure Diagrams
A structure diagram (also called a hierarchy chart) is a graphical representation of a top-down design. It shows the system broken down into modules arranged in a tree-like hierarchy.
Rules for Structure Diagrams
- The root node at the top represents the entire system.
- Each level below shows the sub-tasks that make up the level above.
- Lines connect parent modules to their children, showing which module calls which.
- Modules at the same level are read left to right in the order they would typically execute.
- The lowest-level modules are simple enough to be coded directly.
Example: Student Report System
```mermaid
graph TD
    A[Student Report System] --> B[Input Data]
    A --> C[Output Reports]
    B --> D[Read Marks]
    B --> E[Validate Marks]
    C --> F[Calculate Averages]
    C --> G[Display Results]
```
Example: ATM System
```mermaid
graph TD
    A[ATM] --> B[Authenticate User]
    A --> C[Transaction Menu]
    A --> D[Print Receipt]
    C --> E[Withdraw Cash]
    C --> F[Deposit Funds]
    C --> G[Check Balance]
```
A structure diagram visually represents the hierarchical decomposition of a system into modules. It shows which modules call which other modules and provides a clear overview of the system architecture.
Structure diagrams do not show the order of execution, loops, or decisions. They only show the hierarchy of modules. If asked to draw one, keep boxes neat, use clear labels, and ensure every line connects a parent to a child.
Data Flow Diagrams (DFDs)
A Data Flow Diagram (DFD) models how data moves through a system. It shows where data originates, what processes transform it, where it is stored, and where it ends up. DFDs focus entirely on data – they do not show control flow, timing, or decision logic.
DFD Symbols
| Symbol | Shape | Description |
|---|---|---|
| External Entity | Rectangle (square) | A source or destination of data that is outside the system boundary (e.g. a customer, another system) |
| Process | Circle (or rounded rectangle) | A task or action that transforms data. Must have at least one data flow in and one data flow out. Labelled with a verb phrase (e.g. “Validate Order”) |
| Data Store | Open-ended rectangle (two parallel lines) | A place where data is held for later use (e.g. a database table, a file). Labelled with “D1”, “D2”, etc. and a name |
| Data Flow | Arrow (labelled) | Shows the direction data travels. The label describes what data is flowing (e.g. “Order Details”, “Invoice”) |
What DFDs Do NOT Show
- The order or sequence of processes.
- Decision logic (IF/ELSE).
- Loops or repetition.
- How data is processed internally within a process.
Levels of DFDs
DFDs can be drawn at different levels of detail:
Context Diagram (Level 0):
- Shows the entire system as a single process.
- Shows all external entities and the data flows between them and the system.
- Provides a high-level overview of the system boundary.
- Contains no data stores.
Level 1 DFD:
- Expands the single process from the context diagram into its main sub-processes.
- Shows data stores used by the system.
- Shows data flows between processes, data stores, and external entities.
- Every data flow in the context diagram must appear in the Level 1 DFD.
Level 2 DFD (and beyond):
- Further decomposes individual processes from Level 1 into more detailed sub-processes.
- Used when a Level 1 process is still too complex.
Example: Online Ordering System
Context Diagram (Level 0):
- External entities: Customer, Warehouse
- Single process: Online Ordering System
- Data flows: Customer sends “Order Details” to the system; system sends “Order Confirmation” to Customer; system sends “Dispatch Request” to Warehouse; Warehouse sends “Dispatch Confirmation” to system.
Level 1 DFD might include:
- Process 1: Validate Order
- Process 2: Process Payment
- Process 3: Arrange Dispatch
- Data Store D1: Customer Database
- Data Store D2: Product Database
- Data Store D3: Order Database
Rules for Drawing DFDs
- Every process must have at least one input and one output data flow.
- Data cannot flow directly between two external entities (it must pass through a process).
- Data cannot flow directly between two data stores (it must pass through a process).
- Data cannot flow directly from an external entity to a data store (it must pass through a process).
- All data flows must be labelled.
- Processes should be labelled with verb phrases (e.g. “Calculate Total”, not “Totals”).
When drawing DFDs in an exam, the most common errors are: (1) missing labels on data flows, (2) data flowing directly between two data stores or two external entities without passing through a process, and (3) a process with no output. Always check these rules before finishing your diagram.
A Data Flow Diagram (DFD) is a graphical tool that shows the flow of data through a system, including its sources, destinations, processes, and storage. A context diagram is the highest-level DFD showing the whole system as one process.
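The drawing rules above can be checked mechanically if a DFD is represented as labelled flows between typed nodes. A sketch, with the node-type codes and example nodes invented here for illustration:

```python
# Node types: 'E' = external entity, 'P' = process, 'D' = data store
def invalid_flows(nodes, flows):
    """Return flows that break the DFD rules: every flow must be labelled,
    and data may only move to or from a process."""
    bad = []
    for source, dest, label in flows:
        if not label:
            bad.append((source, dest, "missing label"))
        if nodes[source] != "P" and nodes[dest] != "P":
            bad.append((source, dest, "no process involved"))
    return bad

nodes = {"Customer": "E", "Validate Order": "P",
         "Order Database": "D", "Warehouse": "E"}
flows = [
    ("Customer", "Validate Order", "Order Details"),      # OK: entity -> process
    ("Validate Order", "Order Database", "Valid Order"),  # OK: process -> store
    ("Customer", "Warehouse", "Order"),                   # breaks the rules: entity -> entity
]
print(invalid_flows(nodes, flows))  # [('Customer', 'Warehouse', 'no process involved')]
```

This mirrors the exam checklist: unlabelled flows and flows that bypass a process are exactly the errors the checker reports.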
State-Transition Diagrams
A state-transition diagram (STD) models the behaviour of a system by showing the different states it can be in and the events (or conditions) that cause it to change from one state to another.
Components
| Component | Representation | Description |
|---|---|---|
| State | Rounded rectangle or circle | A condition or situation the system is in at a particular time (e.g. “Idle”, “Processing”, “Error”) |
| Transition | Arrow between states | A change from one state to another, triggered by an event |
| Event/Condition | Label on the transition arrow | What causes the transition (e.g. “Button Pressed”, “Timer Expires”) |
| Action | Label on the transition arrow (after /) | What happens as a result of the transition (e.g. “/ Display Error Message”) |
| Start state | Filled black circle | The initial state when the system begins |
| End state | Filled black circle inside a larger circle | A terminal state (the system stops) |
Notation for Transition Labels
Transitions are typically labelled in the form: event [condition] / action
- event – what triggers the transition (e.g. “Insert Card”)
- [condition] – optional guard condition that must be true (e.g. “[PIN correct]”)
- /action – optional action that occurs during the transition (e.g. “/ Dispense Cash”)
Example: ATM Machine
```mermaid
stateDiagram-v2
    [*] --> Idle
    Idle --> ReadingCard : Insert Card
    ReadingCard --> AwaitingPIN : Card Valid
    ReadingCard --> Idle : Card Invalid / Eject Card
    AwaitingPIN --> MainMenu : PIN Correct
    AwaitingPIN --> AwaitingPIN : PIN Incorrect [attempts < 3]
    AwaitingPIN --> Idle : PIN Incorrect [attempts = 3] / Retain Card
    MainMenu --> ProcessingWithdrawal : Select Withdraw
    ProcessingWithdrawal --> Idle : Sufficient Funds / Dispense Cash
    ProcessingWithdrawal --> MainMenu : Insufficient Funds / Display Message
    MainMenu --> Idle : Select Cancel / Eject Card
```
Example: Simple Traffic Light
```mermaid
stateDiagram-v2
    [*] --> Red
    Red --> RedAmber : Timer expires
    RedAmber --> Green : Timer expires
    Green --> Amber : Timer expires
    Amber --> Red : Timer expires
```
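A state-transition diagram corresponds directly to a transition table in code: current state plus event gives the next state. A minimal sketch of the traffic light diagram above:

```python
# Transition table derived from the traffic light diagram: (state, event) -> next state
TRANSITIONS = {
    ("Red", "timer"): "RedAmber",
    ("RedAmber", "timer"): "Green",
    ("Green", "timer"): "Amber",
    ("Amber", "timer"): "Red",
}

def step(state, event):
    """Follow the transition for this event, or stay in the same state if none is defined."""
    return TRANSITIONS.get((state, event), state)

state = "Red"
for _ in range(4):  # four timer expiries take the light through a full cycle
    state = step(state, "timer")
print(state)  # Red
```

Guard conditions and actions (the `[condition] / action` parts of a transition label) would be added as checks and function calls inside `step`.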
When to Use State-Transition Diagrams
- Modelling systems that have clearly defined states (e.g. vending machines, login systems, games).
- Showing how a system responds to different events or inputs.
- Designing user interfaces where the screen changes based on user actions.
- Modelling communication protocols.
A state-transition diagram models the behaviour of a system by showing its possible states and the events that cause transitions between those states. Each transition may have an associated condition and/or action.
When drawing state-transition diagrams, always include a start state (filled circle) and label every transition arrow with the event that triggers it. If you forget to label transitions, you will lose marks. Remember: states are nouns/adjectives (e.g. “Locked”), transitions are events (e.g. “Enter Correct Code”).
Selecting Suitable Software and Hardware
Part of the design phase of the systems development life cycle is determining what software and hardware will be needed to implement the solution. This decision must be based on the requirements identified during analysis.
Selecting Software
When choosing software for a solution, the designer must consider:
| Factor | Questions to Ask |
|---|---|
| Functionality | Does the software provide all the features needed to meet the requirements? |
| Compatibility | Will it work with the existing hardware and other software in use? |
| Cost | Is it within budget? Consider licence type (single-user, site, subscription) and ongoing costs |
| Off-the-shelf vs bespoke | Can a ready-made package meet the requirements, or does custom software need to be developed? |
| Open source vs proprietary | Is open source appropriate, or is vendor support and guaranteed updates required? |
| Scalability | Will the software cope if the system needs to grow (more users, more data)? |
| Security | Does it meet the organisation’s data protection and security requirements? |
| Support and training | Is support available? Will staff need training, and is training provision available? |
| Platform | Must it run on a specific operating system or device (Windows, Linux, mobile)? |
Selecting Hardware
Hardware selection must ensure the system has sufficient resources to run the software and handle the expected workload:
| Component | Considerations |
|---|---|
| Processor (CPU) | Speed and number of cores — must be sufficient for the processing demands of the software |
| Memory (RAM) | Must be enough to run the OS, application software, and hold working data simultaneously |
| Secondary storage | Sufficient capacity for the database, files, and backups; consider speed (SSD vs HDD) |
| Network infrastructure | Bandwidth, reliability, and security of the network if the system is multi-user or internet-connected |
| Input devices | Appropriate for the users and environment (e.g. touch screens for a shop floor, barcode scanners for a warehouse) |
| Output devices | Appropriate for the output required (e.g. high-resolution monitors for CAD work, label printers for a logistics system) |
| Server vs client | Will data be held centrally on a server, or locally on client machines? |
Making the Recommendation
A design document should include a clear recommendation for hardware and software with justification linked to the requirements:
“The system requires a relational database to store student records. Microsoft SQL Server is recommended as the organisation already uses Windows Server infrastructure and has existing IT support expertise. A dedicated server with 16GB RAM and 1TB SSD storage is recommended to handle the expected volume of 10,000 student records and 200 concurrent users.”
In the exam, if asked to recommend software or hardware, always justify your choice with reference to the scenario. Do not just name a product — explain why it is suitable. For example: “A solid-state drive is recommended because the database requires fast random access to student records, and SSDs provide significantly lower access times than HDDs.”
Installation / Changeover Methods
When a new system is ready to replace an old one, the transition must be managed carefully. The choice of changeover method depends on the risk involved, the budget, the size of the organisation, and how critical the system is.
Direct Changeover
The old system is switched off and the new system is switched on immediately. There is no overlap.
| Advantages | Disadvantages |
|---|---|
| Quickest and cheapest method | Highest risk – if the new system fails, there is no fallback |
| Benefits of the new system are available immediately | No way to compare outputs between old and new systems |
| No duplication of effort | Data could be lost if migration fails |
| Clean break – no confusion about which system to use | Users must adapt immediately with no transition period |
Best suited for: Non-critical systems, or when the old system has completely failed and there is no alternative.
Parallel Running
Both the old and new systems run simultaneously for a period of time. Outputs from both systems are compared to ensure the new system is working correctly.
| Advantages | Disadvantages |
|---|---|
| Lowest risk – the old system is available as a fallback | Most expensive method (double the resources, staff, and effort) |
| Outputs can be compared to verify accuracy | Very demanding on staff who must operate both systems |
| Users can gradually build confidence in the new system | Confusing – staff may not know which system to trust |
| Data integrity can be verified | Takes longer to complete the changeover |
Best suited for: Critical systems where failure would be catastrophic, such as payroll, banking, or air traffic control.
Phased Implementation
The new system is introduced in stages or modules. One part of the system is replaced at a time, while the rest continues to use the old system.
| Advantages | Disadvantages |
|---|---|
| Lower risk than direct – only one module is at risk at a time | Takes a long time to fully implement |
| Problems in one module do not affect the whole system | Old and new modules must be compatible and interface correctly |
| Users can learn the new system gradually | Complex to manage the transition between modules |
| Lessons learned from early modules can improve later ones | Some functionality may be temporarily limited |
Best suited for: Large systems that can be logically divided into independent modules, such as a company-wide ERP system.
Pilot Running
The complete new system is implemented in one location, department, or branch of the organisation. If it works well, it is rolled out to the rest of the organisation.
| Advantages | Disadvantages |
|---|---|
| The new system is tested in a real environment on a small scale | The pilot group may not be representative of the whole organisation |
| If it fails, only a small part of the organisation is affected | The pilot group may have different needs or capabilities |
| Feedback from the pilot group can improve the system before wider rollout | Users not in the pilot group must wait for the new system |
| Reduces overall risk compared to a direct changeover across the whole organisation | Requires careful selection of the pilot group |
Best suited for: Organisations with multiple branches or departments, such as a retail chain testing a new point-of-sale system in one store before rolling it out nationwide.
Changeover Methods Comparison Summary
| Method | Risk Level | Cost | Speed | Best For |
|---|---|---|---|---|
| Direct | High | Low | Fast | Non-critical systems |
| Parallel | Low | High | Slow | Critical, high-risk systems |
| Phased | Medium | Medium | Slow | Large, modular systems |
| Pilot | Medium-Low | Medium | Medium | Multi-site organisations |
For the exam, be prepared to recommend a changeover method for a given scenario and justify your choice. Always consider: How critical is the system? What is the budget? How large is the organisation? What would happen if the new system failed? For example, a hospital patient records system would likely require parallel running due to the critical nature of the data, whereas a small business replacing a simple invoicing system might use direct changeover.
Purpose of Testing
Software testing is the process of executing a program with the deliberate intention of finding errors. Testing is a critical stage in the software development life cycle and is essential for producing reliable, high-quality software.
Why Testing is Important
- Identify defects before the software is released to users, reducing the cost of fixing errors later.
- Verify that the software meets the functional requirements specified during analysis and design.
- Validate that the software is fit for purpose and meets the needs of the end user.
- Improve quality and reliability, giving stakeholders confidence in the system.
- Prevent failures in live environments that could cause data loss, financial loss, or safety hazards.
- Ensure robustness – the software should handle unexpected inputs and situations gracefully without crashing.
Verification checks whether the software has been built correctly according to the specification (“Are we building the product right?”). Validation checks whether the software meets the actual needs of the user (“Are we building the right product?”).
Types of Testing
There are two main approaches to testing, distinguished by whether the tester can see the internal code.
Black-Box Testing
- The tester does not have access to the internal code or structure of the program.
- Testing is based entirely on the inputs and expected outputs defined in the specification.
- The program is treated as a “black box” – the tester only sees what goes in and what comes out.
- Also called functional testing because it tests whether the program functions correctly according to its specification.
When it is used:
- During acceptance testing by end users who do not know the code.
- When testing against the requirements specification.
- When testing external interfaces or APIs.
Advantages:
- No programming knowledge is required.
- Tests are based on user requirements, so they validate the system from the user’s perspective.
- Tester is unbiased – they do not know the code, so they are less likely to make assumptions.
Disadvantages:
- May miss errors in code paths that are not covered by the test cases.
- Cannot test internal logic or code quality.
- Difficult to design comprehensive test cases without knowing the code structure.
White-Box Testing
- The tester has full access to the internal code and structure of the program.
- Test cases are designed to exercise specific code paths, branches, and logic within the program.
- Also called structural testing or glass-box testing.
When it is used:
- During unit testing by the programmer who wrote the code.
- When trying to ensure all branches and paths through the code are tested.
- When looking for specific types of defects such as logic errors.
Advantages:
- Can ensure all code paths are tested (path coverage).
- Can identify dead code (code that is never executed).
- Can test internal logic and boundary conditions within the code.
Disadvantages:
- Requires programming knowledge and access to the source code.
- Tester may be biased – they may test the code the way they think it should work.
- Does not check whether the program meets user requirements (only that the code works as written).
Comparison
| Feature | Black-Box Testing | White-Box Testing |
|---|---|---|
| Code access | No | Yes |
| Based on | Specification/requirements | Internal code structure |
| Performed by | Testers / end users | Programmers / developers |
| Also known as | Functional testing | Structural / glass-box testing |
| Focus | What the program does | How the program does it |
| Tests paths through code | No | Yes |
Remember: Black box = can’t see inside = testing based on specification. White box = can see inside = testing based on code. A thorough testing strategy uses both approaches.
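The distinction can be seen in a minimal sketch (Python, using a hypothetical `classify_grade` function with an assumed pass mark of 40):

```python
def classify_grade(mark):
    """Return 'Pass' for marks of 40 or above, otherwise 'Fail'."""
    if mark >= 40:
        return "Pass"
    return "Fail"

# White-box tests: chosen by reading the code, one per branch,
# so every path through the function is executed
assert classify_grade(40) == "Pass"   # true branch (also a boundary)
assert classify_grade(39) == "Fail"   # false branch

# Black-box tests: chosen purely from the specification
# ("40 or above passes") without looking at the code
assert classify_grade(100) == "Pass"
assert classify_grade(0) == "Fail"
```

The white-box tests guarantee both branches run; the black-box tests check the behaviour a user was promised. A thorough strategy includes both.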
Levels of Testing
Testing is not a single activity but a series of tests performed at different stages of development. Each level builds on the previous one.
Unit Testing
- Tests individual components or modules of a program in isolation.
- Usually performed by the programmer who wrote the code.
- Each function, procedure, or module is tested independently with its own test data.
- Uses white-box testing techniques because the developer has access to the code.
- Goal: Verify that each unit of the software performs as designed.
Example: Testing a calculateVAT() function on its own with various input values to check it returns the correct results.
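A unit test for such a function might look like this — a minimal Python sketch with a hypothetical `calculate_vat` implementation and an assumed 20% rate, exercised with normal, boundary, and erroneous data:

```python
VAT_RATE = 0.20  # assumed rate for illustration

def calculate_vat(net_amount):
    """Return the VAT due on a net amount, rounded to 2 decimal places."""
    if net_amount < 0:
        raise ValueError("amount cannot be negative")
    return round(net_amount * VAT_RATE, 2)

# Unit tests: the function is exercised in isolation
assert calculate_vat(100.00) == 20.00    # normal data
assert calculate_vat(0) == 0.00          # boundary data
try:
    calculate_vat(-50)                   # erroneous data
    assert False, "negative amounts should be rejected"
except ValueError:
    pass
```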
Integration Testing
- Tests how individual modules work together when they are combined.
- Checks that data is passed correctly between modules and that interfaces work as expected.
- May reveal issues with parameter passing, data formats, or timing.
- Goal: Expose faults in the interaction between integrated units.
Example: Testing that the calculateVAT() function works correctly when called by the generateInvoice() module and that the returned value is used correctly.
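An integration test for that interaction could be sketched as follows (hypothetical names; the point is that `generate_invoice` is tested together with the function it calls, not in isolation):

```python
VAT_RATE = 0.20  # assumed rate for illustration

def calculate_vat(net_amount):
    return round(net_amount * VAT_RATE, 2)

def generate_invoice(items):
    """Build an invoice from a list of (description, price) pairs."""
    net = sum(price for _, price in items)
    vat = calculate_vat(net)  # the interface under test
    return {"net": net, "vat": vat, "total": round(net + vat, 2)}

# Integration test: checks that the value returned by calculate_vat
# is passed back and used correctly by generate_invoice
invoice = generate_invoice([("Widget", 40.00), ("Gadget", 60.00)])
assert invoice == {"net": 100.00, "vat": 20.00, "total": 120.00}
```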
System Testing
- Tests the complete, integrated software system as a whole.
- Checks that the entire system meets all the specified requirements (functional and non-functional).
- Includes testing performance, security, usability, and compatibility.
- Goal: Evaluate the system’s compliance with its specified requirements.
Example: Testing the entire invoicing system end-to-end – from entering order details to generating and printing the final invoice.
Acceptance Testing
- The final level of testing, performed to determine whether the system is ready for release.
- Usually carried out by the end users or client, not the developers.
- Tests the system against the original user requirements and business needs.
- If the system passes acceptance testing, it is approved for deployment.
- Goal: Confirm that the system meets the business requirements and is acceptable for delivery.
Example: The client uses the invoicing system with real data for a trial period and confirms that it meets their needs.
Summary of Testing Levels
| Level | Who performs it | What is tested | When |
|---|---|---|---|
| Unit | Programmer | Individual modules/functions | During development |
| Integration | Development team | Modules working together | After unit testing |
| System | Testing team | Complete system | After integration testing |
| Acceptance | End user / client | System against user requirements | Before deployment |
Remember the order: Unit -> Integration -> System -> Acceptance. Think of it as building up from the smallest part (a single function) to the entire system being signed off by the client. Each level of testing can only begin once the previous level is complete.
Alpha and Beta Testing
These are two stages of testing that occur before a product is released to the general public.
Alpha Testing
- Performed in-house by the development team or a dedicated testing team within the organisation.
- Takes place in a controlled environment (the developer’s site).
- Aims to find bugs and issues before the software is given to external users.
- Both white-box and black-box techniques may be used.
- The software may still be incomplete or unstable at this stage.
Beta Testing
- Performed by a selected group of external users (beta testers) outside the development organisation.
- Takes place in the user’s own environment with real hardware and real usage patterns.
- Users report bugs, provide feedback on usability, and suggest improvements.
- The software is feature-complete but may still contain bugs.
- Feedback from beta testing is used to make final fixes and improvements before the official release.
Comparison
| Feature | Alpha Testing | Beta Testing |
|---|---|---|
| Who | Internal staff / developers | External users / public volunteers |
| Where | Developer’s site (controlled environment) | User’s site (real-world environment) |
| When | Before beta testing | After alpha testing, before final release |
| Purpose | Find bugs in a controlled setting | Find bugs in real-world conditions, gather user feedback |
| Software state | May be incomplete | Feature-complete but may have bugs |
Alpha testing is testing done in-house before release. Beta testing is testing done by external users in a real-world environment before the final release.
Test Plans and Test Tables
A test plan is a document that describes the testing strategy for a piece of software. It includes a test table which lists every individual test to be performed.
Structure of a Test Table
A test table typically includes the following columns:
| Column | Purpose |
|---|---|
| Test Number | A unique identifier for each test |
| Description | What is being tested and why |
| Test Data | The specific input data to use |
| Test Data Type | Normal, boundary, or erroneous |
| Expected Result | What the program should do with this input |
| Actual Result | What the program actually did (filled in after testing) |
| Pass/Fail | Whether the actual result matched the expected result |
Example Test Table: Password Validation
The password must be 8–20 characters long and contain at least one digit.
| Test # | Description | Test Data | Type | Expected Result | Actual Result | Pass/Fail |
|---|---|---|---|---|---|---|
| 1 | Valid password with digit | “Hello123” | Normal | Accepted | | |
| 2 | Valid password, exactly 8 chars | “Pass1234” | Boundary | Accepted | | |
| 3 | Valid password, exactly 20 chars | “Abcdefghij1234567890” | Boundary | Accepted | | |
| 4 | Too short (7 chars) | “Pass123” | Boundary | Rejected - “Too short” | | |
| 5 | Too long (21 chars) | “Abcdefghij12345678901” | Boundary | Rejected - “Too long” | | |
| 6 | No digit included | “Password” | Erroneous | Rejected - “Must contain a digit” | | |
| 7 | Empty string | ”” | Erroneous | Rejected - “Too short” | | |
| 8 | Only digits | “12345678” | Normal | Accepted | | |
A test plan outlines the testing strategy and contains a test table that specifies each individual test case, including the test data, expected results, and actual results.
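The validation rule in this example could be implemented and run against the test table as follows (a Python sketch; the function name `validate_password` is hypothetical):

```python
def validate_password(password):
    """Password must be 8-20 characters and contain at least one digit."""
    if len(password) < 8:
        return "Too short"
    if len(password) > 20:
        return "Too long"
    if not any(ch.isdigit() for ch in password):
        return "Must contain a digit"
    return "Accepted"

# Running the test table above — one assertion per row
assert validate_password("Hello123") == "Accepted"                # Test 1
assert validate_password("Pass1234") == "Accepted"                # Test 2
assert validate_password("Abcdefghij1234567890") == "Accepted"    # Test 3
assert validate_password("Pass123") == "Too short"                # Test 4
assert validate_password("Abcdefghij12345678901") == "Too long"   # Test 5
assert validate_password("Password") == "Must contain a digit"    # Test 6
assert validate_password("") == "Too short"                       # Test 7
assert validate_password("12345678") == "Accepted"                # Test 8
```

Each assertion corresponds to one row of the table, so the Actual Result and Pass/Fail columns can be filled in mechanically from the test run.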
The Need for Software Maintenance
Software maintenance is the process of modifying a software system after it has been delivered to the customer. It is an ongoing and essential part of the software development life cycle.
Why Software Needs Maintenance
- Bugs are discovered after release that were not found during testing. No testing process can guarantee that all defects have been found.
- User requirements change over time. Businesses evolve, and the software must adapt to meet new needs.
- The operating environment changes. New operating systems, hardware, browsers, or regulations may require the software to be updated.
- Users request improvements. As users become familiar with the software, they often identify features they would like added or enhanced.
- Security vulnerabilities are discovered that must be patched to protect users and data.
- Performance issues become apparent under real-world usage patterns that were not anticipated during development.
The Cost of Maintenance
Maintenance typically accounts for 60–80% of the total cost of a software system over its entire lifetime. This is far more than the cost of initial development. This makes it critical to write maintainable code from the outset using good programming practices (modularity, meaningful variable names, comments, documentation).
Software maintenance is the modification of a software product after delivery to correct faults, improve performance, or adapt the software to a changed environment. It is the longest and most expensive phase of the software development life cycle.
Corrective Maintenance
Corrective maintenance involves diagnosing and fixing errors (bugs) that are discovered after the software has been released. These are faults that were not detected during the testing phase.
Characteristics
- It is a reactive process – it happens in response to a problem being reported.
- The trigger is a bug report from a user or an error log from the system.
- It may involve fixing crashes, incorrect calculations, data corruption, or security vulnerabilities.
- Fixes are often released as patches or hotfixes.
Examples
- A user reports that the program crashes when they enter a date in a certain format. The developer identifies a parsing error and releases a patch.
- An online store calculates the wrong total when a discount code is applied to an order with more than 10 items. The logic error in the discount calculation is found and corrected.
- A security researcher discovers that the login system is vulnerable to SQL injection. A corrective patch is released urgently.
- The system fails to send email notifications when a user’s subscription is about to expire due to an incorrect date comparison.
Corrective maintenance is about fixing things that are broken. The key word is “error” or “bug”. If a scenario describes a program not working correctly, the maintenance required is corrective.
Adaptive Maintenance
Adaptive maintenance involves modifying the software to keep it working correctly in a changing environment. The software itself is not faulty – but the world around it has changed.
Characteristics
- It is a proactive or reactive process depending on whether the environmental change is anticipated.
- The trigger is a change in the external environment, not a bug in the software.
- The software’s functionality does not change – it is adapted so that the same functionality continues to work in the new environment.
Examples
- A new version of Windows is released and the software needs to be updated to maintain compatibility.
- The government changes the VAT rate from 20% to 25%. The software must be updated to use the new rate.
- A company migrates from on-premises servers to cloud-based infrastructure, and the software must be adapted to work in the new hosting environment.
- A web application must be updated to work with a new version of a web browser.
- New data protection legislation (e.g. GDPR) requires changes to how the software handles personal data.
- The hardware platform changes (e.g. from 32-bit to 64-bit processors) and the software must be recompiled or modified.
Adaptive maintenance is about adapting to change in the environment. The key words are “new operating system”, “new hardware”, “change in regulations”, or “new platform”. The software is not broken – it just needs to work in a changed environment.
Perfective Maintenance
Perfective maintenance involves making changes to improve the software or to add new features that were not part of the original specification. The software is working correctly, and the environment has not changed – the goal is to make the software better.
Characteristics
- It is a proactive process, often driven by user feedback or competitive pressure.
- The trigger is a user request for new functionality or an internal decision to improve performance.
- Can involve adding new features, improving the user interface, optimising performance, or improving code quality (refactoring).
Examples
- Users request a “dark mode” option for the interface. The developers add this feature.
- A report that previously took 30 seconds to generate is optimised to run in 2 seconds.
- An “export to PDF” feature is added to a word processing application.
- The user interface is redesigned to be more intuitive based on user feedback.
- A mobile app originally designed for phones is updated to support tablets with an optimised layout.
- Search functionality is enhanced to support filters and sorting options.
Perfective maintenance is about making something better or adding new features. The key words are “improve”, “enhance”, “new feature”, “optimise”, or “user requested”. The software is not broken and the environment has not changed – it is simply being improved.
Comparison of Maintenance Types
| Feature | Corrective | Adaptive | Perfective |
|---|---|---|---|
| Purpose | Fix bugs and errors | Adapt to environmental changes | Improve or enhance functionality |
| Trigger | Bug report / error detected | New OS, hardware, regulations, etc. | User request / business decision |
| Is the software faulty? | Yes | No | No |
| Has the environment changed? | No | Yes | No |
| Nature | Reactive | Reactive or proactive | Proactive |
| Urgency | Often high (especially for critical bugs) | Moderate (depends on deadline for change) | Low to moderate |
| Example | Fixing a crash when printing | Updating for Windows 12 | Adding an export feature |
| Another example | Correcting a wrong calculation | Adapting to new tax rates | Improving report loading speed |
Decision Flowchart for Identifying Maintenance Type
- Is the software producing incorrect results or crashing? Yes -> Corrective
- Has something outside the software changed (OS, hardware, laws)? Yes -> Adaptive
- Is the change to add a feature or improve performance? Yes -> Perfective
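The flowchart maps directly onto a short piece of code (an illustrative sketch; the function name is invented):

```python
def maintenance_type(is_faulty, environment_changed):
    """Apply the decision flowchart: check for faults first,
    then for environmental change; otherwise it is an improvement."""
    if is_faulty:
        return "Corrective"
    if environment_changed:
        return "Adaptive"
    return "Perfective"

assert maintenance_type(True, False) == "Corrective"    # crash when printing
assert maintenance_type(False, True) == "Adaptive"      # new OS released
assert maintenance_type(False, False) == "Perfective"   # user requests a feature
```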
The three types of software maintenance are: Corrective (fixing bugs), Adaptive (adapting to environmental changes), and Perfective (improving or enhancing the software). In exam scenarios, identify the trigger to determine the type.
Factors Affecting Maintainability
Maintainability is a measure of how easy it is to modify, update, or fix a piece of software. Poorly written software is expensive and time-consuming to maintain.
Factors That Make Software Easier to Maintain
| Factor | How it Helps Maintainability |
|---|---|
| Modular design | Each module can be understood, tested, and modified independently without affecting other parts of the system |
| Meaningful variable names | Makes the code self-documenting – a maintenance programmer can quickly understand what each variable represents |
| Use of constants | Changing a value that appears in many places only requires one edit (e.g. changing VAT_RATE) |
| Comments and documentation | Internal comments explain the purpose of code sections; external documentation explains the overall system |
| Consistent coding style | Consistent indentation, naming conventions, and formatting make the code predictable and easier to read |
| Low coupling | Modules are independent – changing one module does not require changes to many others |
| High cohesion | Each module does one thing well, making it easier to understand and modify |
| Version control | Tracking changes allows maintainers to understand what was changed, when, and why |
| Proper testing | A comprehensive test suite means changes can be verified quickly (regression testing) |
| Avoiding global variables | Using local variables and parameters reduces unexpected side effects when code is changed |
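Two of these factors — meaningful names and named constants — can be shown in a short before-and-after sketch (hypothetical code; the VAT rate is assumed for illustration):

```python
# Harder to maintain: cryptic name and a "magic number"
def f(x):
    return x * 0.2

# Easier to maintain: descriptive identifiers and a named constant
VAT_RATE = 0.2  # if the rate changes, only this line needs editing

def calculate_vat(net_amount):
    """Return the VAT due on a net amount."""
    return net_amount * VAT_RATE

# Both behave identically; only the maintainability differs
assert f(100) == calculate_vat(100)
```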
Factors That Make Software Harder to Maintain
- Monolithic code – one large block of code with no modular structure.
- Poor or no documentation – the maintenance programmer has no guide to how the system works.
- Cryptic variable names – names like `x`, `a1`, `temp2` give no clue about their purpose.
- Hard-coded values – “magic numbers” scattered throughout the code are difficult to find and change consistently.

- Tight coupling – modules depend heavily on each other, so changing one module causes a cascade of changes.
- No version control – no history of changes, making it impossible to track or revert modifications.
- Original developer unavailable – if the person who wrote the code has left the organisation, their knowledge is lost.
When asked how to make software easier to maintain, always give specific practical examples. Do not just say “use comments” – say “use comments to explain the purpose of each subroutine and any complex logic, so that a maintenance programmer who did not write the original code can understand it.”
Backup and Recovery Procedures
The Need for Backup
Even with good security measures, data can be lost or corrupted due to hardware failure, human error, software bugs, or malicious attack. Regular backups are essential to ensure data can be recovered.
Types of Backup
| Type | What is copied | Restoration process | Speed to create | Speed to restore |
|---|---|---|---|---|
| Full backup | All data, every time | Restore from single backup | Slowest | Fastest |
| Incremental backup | Only data changed since the last backup (full or incremental) | Restore full backup + every incremental since | Fastest | Slowest (multiple sets needed) |
| Differential backup | All data changed since the last full backup | Restore full backup + latest differential only | Medium | Medium |
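The difference in restoration between incremental and differential backups can be modelled with a few dictionaries (an illustrative sketch; each backup is a map of filename to contents):

```python
# Sunday full backup
full_backup = {"a.txt": "v1", "b.txt": "v1", "c.txt": "v1"}

# Files changed each day after the full backup
monday_changes  = {"a.txt": "v2"}
tuesday_changes = {"b.txt": "v2"}

# Incremental backups store only that day's changes
incrementals = [monday_changes, tuesday_changes]

# A differential backup stores ALL changes since the full backup
tuesday_differential = {"a.txt": "v2", "b.txt": "v2"}

# Restore from incrementals: full backup, then EVERY incremental in order
restored = dict(full_backup)
for inc in incrementals:
    restored.update(inc)

# Restore from differential: full backup + the LATEST differential only
restored_diff = dict(full_backup)
restored_diff.update(tuesday_differential)

assert restored == restored_diff == {"a.txt": "v2", "b.txt": "v2", "c.txt": "v1"}
```

Both routes reach the same state, but the incremental restore needs every set applied in sequence, while the differential restore needs only one.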
Backup Rotation: Grandfather-Father-Son (GFS)
The GFS scheme keeps three generations of backup:
- Son — most recent (e.g. last night)
- Father — previous (e.g. last week)
- Grandfather — oldest (e.g. last month)
Each new backup cycle promotes son → father → grandfather, overwriting the oldest. This ensures multiple recovery points without unlimited storage.
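The promotion step can be sketched in a few lines (illustrative; `rotate_gfs` is an invented helper):

```python
def rotate_gfs(generations, new_backup):
    """Promote son -> father -> grandfather, discarding the oldest."""
    generations["grandfather"] = generations["father"]
    generations["father"] = generations["son"]
    generations["son"] = new_backup
    return generations

tapes = {"son": "week3", "father": "week2", "grandfather": "week1"}
rotate_gfs(tapes, "week4")  # week1, the oldest, is overwritten
assert tapes == {"son": "week4", "father": "week3", "grandfather": "week2"}
```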
Off-Site and Cloud Backup
- On-site backups are vulnerable to the same local disaster (fire, flood) as the primary data.
- Off-site backups are physically stored at a separate location.
- Cloud backups use remote servers over the internet, providing geographic separation.
Recovery Procedures
A documented recovery procedure ensures that, after a failure, data can be restored quickly and correctly:
- Identify the cause of the failure and resolve it before restoring data.
- Select the appropriate backup (most recent clean copy).
- For incremental backup schemes: restore the last full backup, then apply each incremental backup in sequence.
- Use the transaction log (a record of all changes since the last backup) to replay any transactions that occurred after the backup was taken.
- Verify the restored data is complete and uncorrupted before returning the system to use.
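The restore sequence above can be modelled in a few lines (an illustrative sketch with invented data; the transaction log is replayed only after the backups have been applied):

```python
# 1. Last full backup, 2. incrementals since, 3. transaction log after that
full_backup = {"balance": 100}
incrementals = [{"balance": 120}]                       # changes since full
transaction_log = [("deposit", 30), ("withdraw", 10)]   # after last backup

# Restore the full backup, then apply each incremental in sequence
state = dict(full_backup)
for inc in incrementals:
    state.update(inc)

# Replay the transaction log to recover work done after the backup
for action, amount in transaction_log:
    if action == "deposit":
        state["balance"] += amount
    elif action == "withdraw":
        state["balance"] -= amount

assert state == {"balance": 140}
```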
A full backup copies all data. An incremental backup copies only data changed since the last backup. A differential backup copies all data changed since the last full backup. The GFS rotation scheme maintains three generations of backup.
The trade-off between backup types is creation time vs restoration time. Full backups take longest to create but are simplest to restore. Incremental backups are quickest to create but slowest to restore as multiple sets must be applied in order. Differential backups are a compromise. Always consider both sides of this trade-off in exam answers.
Technical Documentation
Technical documentation is written for programmers and IT professionals who need to understand, maintain, or modify the software. It describes the internal workings of the system.
Contents of Technical Documentation
| Component | Description |
|---|---|
| System overview | A high-level description of what the system does and its architecture |
| Data flow diagrams | Show how data moves through the system |
| Entity-relationship diagrams | Show the database structure and relationships between tables |
| Data dictionary | Lists all data items, their types, sizes, validation rules, and descriptions |
| Pseudocode / algorithms | Describes the key algorithms used in the system |
| Program listings | The annotated source code |
| File structures | Details of file formats, record structures, and field descriptions |
| Test plans and results | The test strategy, test data, expected results, and actual results |
| Installation guide | How to install and configure the system on new hardware |
| Known issues | A list of known bugs or limitations and any workarounds |
| Change log | A record of all changes made to the system since its initial release |
Why Technical Documentation is Important
- Enables maintenance programmers (who may not have written the original code) to understand the system.
- Provides a reference for debugging – programmers can look up data structures, algorithms, and expected behaviour.
- Ensures continuity – if the original developer leaves, the documentation preserves their knowledge.
- Supports adaptive and perfective maintenance by providing a clear understanding of the current system before changes are made.
Technical documentation is aimed at IT professionals and programmers. It describes the internal structure, design, and implementation of the software to support future maintenance and development.
User Documentation
User documentation is written for the end users of the software. It explains how to use the system without requiring any technical knowledge.
Contents of User Documentation
| Component | Description |
|---|---|
| Installation guide | Step-by-step instructions for installing the software |
| Getting started / tutorial | A guided walkthrough for new users to learn the basics |
| User manual | Comprehensive reference covering all features and functions |
| Frequently Asked Questions (FAQ) | Answers to common questions and problems |
| Troubleshooting guide | Solutions to common errors and issues |
| Glossary | Definitions of technical terms used in the documentation |
| System requirements | The minimum hardware and software needed to run the application |
| Contact information | How to reach technical support for further help |
Forms of User Documentation
- Printed manuals – physical books shipped with the software (less common today).
- Online help – built-in help system accessible from within the application (e.g. pressing F1).
- Video tutorials – recorded demonstrations of how to use features.
- Interactive guides – step-by-step walkthroughs embedded in the software interface.
- Knowledge bases – searchable online databases of help articles.
Differences Between Technical and User Documentation
| Feature | Technical Documentation | User Documentation |
|---|---|---|
| Audience | Programmers and IT professionals | End users |
| Purpose | Support maintenance and development | Help users operate the software |
| Content | Code, algorithms, data structures, system design | Instructions, tutorials, screenshots |
| Language | Technical, assumes programming knowledge | Non-technical, plain language |
| Examples include | Data dictionary, ERD, pseudocode, test plans | User manual, FAQ, tutorial, troubleshooting guide |
The exam may give a scenario and ask whether technical or user documentation is more appropriate. Technical documentation is for the IT team maintaining the system. User documentation is for the people using the system day-to-day. Both are important and serve different purposes.