Ethical, Legal & Environmental Impacts

GCSE — Unit 1: Understanding Computer Science

Ethical impacts: Privacy and cybersecurity issues

Ethics in computing involves considering whether the use of technology is morally right or wrong, and how it affects individuals and society. Two major ethical concerns are privacy and cybersecurity.

Privacy issues

  • Surveillance — governments and organisations can monitor online activity, CCTV footage, and communications on a massive scale. This raises questions about whether constant monitoring is justified
  • Data collection — social media platforms, search engines, and apps collect vast amounts of personal data, often without users fully understanding what is being gathered or how it is used
  • Profiling — companies build detailed profiles of individuals based on their browsing habits, purchases, and social media activity, which can be used for targeted advertising or even discrimination
  • Right to be forgotten — individuals may want their personal data removed from the internet, but this can conflict with freedom of information
  • Location tracking — smartphones and apps can track a user’s physical location, raising concerns about stalking, surveillance, and the misuse of location data

Cybersecurity issues

  • Data breaches — large-scale hacks expose millions of people’s personal data, leading to identity theft and financial loss
  • Cyberwarfare — nations may use cyberattacks to disrupt critical infrastructure (power grids, hospitals, transport systems) in other countries
  • Ransomware on public services — attacks on hospitals and schools can endanger lives and disrupt essential services
  • Responsibility — when a data breach occurs, there is debate about who is responsible: the company that held the data, the developer of the vulnerable software, or the hacker who exploited it
  • Ethical hacking — some argue that hacking to expose vulnerabilities is justified if it leads to better security; others argue that any unauthorised access is wrong

Ethics — a set of moral principles that govern what is considered right and wrong behaviour. In computing, ethical issues arise when technology affects people’s rights, privacy, safety, or wellbeing.

Ethical questions in exams rarely have a single “correct” answer. You are expected to discuss both sides of the argument and support your points with examples. For instance, CCTV improves public safety but reduces personal privacy.


The Digital Divide

The digital divide — the gap between those who have access to modern technology and the internet and those who do not. It creates inequality in education, employment, and access to services.

Types of Digital Divide

  • Economic divide — poorer individuals and families cannot afford devices, broadband, or mobile data
  • Geographic divide — rural and remote areas may lack broadband infrastructure
  • Generational divide — older people may lack digital skills or confidence
  • Skills/education divide — some people have not had the opportunity to learn digital skills
  • Disability divide — not all technology is designed to be accessible to people with disabilities

Consequences

  • People without digital access are excluded from online services — banking, healthcare, government services, job applications, and education
  • The divide reinforces existing inequalities — those who are already disadvantaged fall further behind
  • During the COVID-19 pandemic, the divide became more visible as those without internet access could not take part in remote learning or work from home

Bridging the Divide

  • Government investment in broadband infrastructure for underserved areas
  • Free or subsidised devices and internet access for low-income households
  • Digital skills training in schools, libraries, and community centres
  • Designing technology to be accessible to all users

AI, Automation and Ethical Concerns

Job Displacement

  • Automation and AI may replace many jobs in manufacturing (robots), transport (self-driving vehicles), retail (self-checkouts), and customer service (chatbots)
  • New jobs are created in software development, data science, cybersecurity, and AI research — but these require retraining

Bias in AI

  • AI systems learn from training data — if that data contains biases, the AI will reproduce them
  • Examples: facial recognition systems with higher error rates for certain ethnicities, hiring algorithms that discriminate by gender, credit scoring that disadvantages certain groups
  • This raises serious concerns about fairness and discrimination

Accountability

  • If an autonomous vehicle causes an accident, who is responsible — the manufacturer, the programmer, or the owner?
  • If an AI makes a wrong medical diagnosis, who is liable?
  • The “black box” problem — many AI systems cannot explain how they reached a decision, making it difficult to hold anyone accountable

When discussing AI ethics, consider both benefits and risks. AI can improve healthcare, reduce human error, and increase efficiency — but it also raises concerns about bias, accountability, and job losses. Always give specific examples.


Censorship and Freedom of Speech

  • Governments in some countries censor the internet, blocking access to certain websites, social media, and news sources
  • Social media platforms moderate content by removing posts that violate their policies (hate speech, misinformation, violent content)
  • Misinformation — false or misleading information spreads rapidly online, and there is debate about who should be responsible for stopping it
  • The balance between preventing harm and protecting free speech is a key ethical challenge — over-censorship can suppress legitimate views, while under-moderation can allow harmful content to spread

Social Media and Mental Health

  • Social media platforms use addictive design features (infinite scrolling, notifications, likes) to keep users engaged
  • Studies link heavy social media use to increased anxiety, depression, and loneliness, particularly among young people
  • Cyberbullying — social media provides a platform for harassment that can follow victims everywhere
  • FOMO (Fear of Missing Out) — seeing other people’s curated posts can lead to unrealistic comparisons and anxiety about being left out
  • Echo chambers — algorithms show users content that reinforces their existing views, reducing exposure to different perspectives

Professional standards: Formal and informal codes of behaviour

People who work in the computing industry are expected to follow standards of professional behaviour, just like doctors or lawyers.

Formal codes of conduct

Professional bodies publish codes of conduct that set out the expected behaviour and responsibilities of computing professionals.

Examples include:

  • BCS (British Computer Society) Code of Conduct — requires members to act with integrity, respect confidentiality, and maintain professional competence
  • ACM (Association for Computing Machinery) Code of Ethics — emphasises honesty, fairness, and the responsibility to avoid harm
  • IEEE Code of Ethics — focuses on public safety, honesty, and avoiding conflicts of interest

Common principles in formal codes:

  • Public interest — prioritise the safety and wellbeing of the public
  • Integrity — be honest and transparent in professional work
  • Competence — only undertake work you are qualified to do; keep skills up to date
  • Confidentiality — protect sensitive data and respect client privacy
  • Professional development — continuously improve knowledge and skills
  • Responsibility — take accountability for the impact of your work

Informal codes of behaviour

Not all standards are written in official documents. Many are unwritten expectations within workplaces and the wider tech community:

  • Open-source contributions — sharing code and knowledge freely with the community
  • Responsible disclosure — reporting security vulnerabilities to the software vendor before making them public, giving them time to fix the issue
  • Respectful communication — behaving professionally in online forums, code reviews, and collaborative projects
  • Whistleblowing — speaking up when an employer or colleague is acting unethically, even if it is not formally required

If asked about professional standards, give specific examples of codes of conduct (e.g. BCS) and explain why they exist — to protect the public, ensure trust in the profession, and provide a framework for resolving ethical dilemmas.


Legislation: Security, privacy, data protection, freedom of information

Several UK laws govern how computers and data may be used. You need to know the purpose and key provisions of each.

Computer Misuse Act 1990

This law makes it illegal to gain unauthorised access to computer systems.

Three main offences:

  1. Unauthorised access to computer material — e.g. accessing someone else’s account without permission (up to 2 years imprisonment)
  2. Unauthorised access with intent to commit further offences — e.g. hacking into a bank system to steal money (up to 5 years)
  3. Unauthorised modification of computer material — e.g. deleting files, planting viruses, changing data (up to 10 years)

Data Protection Act 2018 (incorporating UK GDPR)

This law controls how organisations collect, store, process, and share personal data.

Key principles — personal data must be:

  • Lawful, fair, and transparent — collected legally with the person’s knowledge
  • Purpose limitation — used only for the specified purpose
  • Data minimisation — only the data that is needed should be collected
  • Accuracy — data must be kept accurate and up to date
  • Storage limitation — data must not be kept longer than necessary
  • Integrity and confidentiality — data must be kept secure

The 7th principle is accountability — the organisation must be able to demonstrate compliance with all the other principles.

Data subject rights — individuals have 8 rights under the DPA/GDPR:

  • Right to be informed — told how their data is being used
  • Right of access — request a copy of the data held about them
  • Right to rectification — have inaccurate data corrected
  • Right to erasure — have their data deleted (“right to be forgotten”)
  • Right to restrict processing — request that their data is stored but not processed
  • Right to data portability — receive their data in a format that can be transferred to another service
  • Right to object — object to their data being used for certain purposes (e.g. direct marketing)
  • Rights relating to automated decision-making — not be subject to decisions made solely by automated processing (including profiling) without human involvement

Penalties: Organisations that breach the DPA/GDPR can be fined up to £17.5 million or 4% of annual global turnover (whichever is greater). The Information Commissioner’s Office (ICO) enforces this law.
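The maximum fine is simply the greater of the two caps. A minimal sketch of that calculation (the £17.5 million cap and 4% rate come from the text above; the turnover figures below are made up for illustration):

```python
def max_gdpr_fine(annual_global_turnover: float) -> float:
    """Return the maximum UK GDPR fine in pounds: the greater of
    £17.5 million or 4% of annual global turnover."""
    fixed_cap = 17_500_000                        # £17.5 million
    turnover_cap = 0.04 * annual_global_turnover  # 4% of global turnover
    return max(fixed_cap, turnover_cap)

# Illustrative: a company with £1 billion turnover — 4% is £40 million,
# which exceeds the fixed cap, so the higher figure applies
print(max_gdpr_fine(1_000_000_000))  # 40000000.0
```

For smaller organisations, 4% of turnover falls below £17.5 million, so the fixed cap is the maximum instead.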

Freedom of Information Act 2000

  • Gives individuals the right to request information held by public authorities (government departments, the NHS, local councils, schools)
  • The public body must respond within 20 working days
  • Some information is exempt — for example, information that could harm national security or ongoing criminal investigations

Copyright, Designs and Patents Act 1988

  • Protects the intellectual property of creators — including software, music, images, and written works
  • It is illegal to copy, distribute, or modify copyrighted work without permission
  • Software piracy (copying and distributing software without a licence) is a criminal offence
  • Protects the rights of software developers to profit from their work

Regulation of Investigatory Powers Act 2000 (RIPA)

  • Gives certain public bodies (police, intelligence services) the legal power to carry out surveillance and intercept communications (phone calls, emails, internet activity)
  • Internet Service Providers (ISPs) can be required to give access to a customer’s communications data
  • Intended for national security, preventing crime, and protecting public safety
  • Controversial because it balances security against privacy — critics argue it enables mass surveillance

Communications Act 2003

  • Makes it an offence to send malicious, indecent, or threatening messages via electronic communications (email, social media, messaging apps)
  • Covers cyberbullying, online harassment, and sending offensive messages
  • Also regulates telecommunications services and broadcasting

Equality Act 2010

  • Requires organisations to ensure their digital services are accessible to people with disabilities
  • Websites and apps should be usable with screen readers, keyboard navigation, and other assistive technologies
  • Important for ensuring technology does not exclude people

Consumer Rights Act 2015

  • Gives consumers rights when purchasing digital content (software, apps, games, music, films)
  • Digital content must be of satisfactory quality, fit for purpose, and as described
  • Consumers can request a repair, replacement, or refund if digital content is faulty

Data Protection Act 2018 — the main UK law governing how personal data must be handled. It gives individuals rights over their data and imposes obligations on organisations that process it.

Computer Misuse Act 1990 — the law that makes hacking and unauthorised access to computer systems a criminal offence.

You do not need to memorise the exact dates of legislation, but you should know the name, purpose, and key provisions of each act. A common question gives a scenario and asks which law has been broken — for example, an employee accessing a colleague’s email without permission breaks the Computer Misuse Act.


Environmental impacts of digital technology

The widespread use of digital technology has significant effects on the environment, both positive and negative.

Negative environmental impacts

  • Energy consumption — data centres that power cloud services, streaming, and the internet consume enormous amounts of electricity, much of which is generated from fossil fuels
  • E-waste — discarded electronics (phones, laptops, monitors) contain toxic materials such as lead, mercury, and cadmium. Much e-waste ends up in landfill or is shipped to developing countries
  • Manufacturing — producing electronic devices requires mining rare earth metals, which causes habitat destruction, water pollution, and significant carbon emissions
  • Planned obsolescence — some manufacturers design products to become outdated quickly, encouraging frequent replacements and increasing waste
  • Carbon footprint — every internet search, email, and streamed video generates carbon emissions through the energy used by servers, networks, and devices
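The carbon footprint point can be made concrete with a rough back-of-envelope estimate. All the figures below are illustrative assumptions for the sake of the arithmetic, not measured values:

```python
# Rough, illustrative estimate of emissions from video streaming.
# Both input figures are assumptions, not authoritative measurements.
energy_per_hour_kwh = 0.08    # assumed energy per hour of HD streaming
grid_carbon_g_per_kwh = 200   # assumed grid carbon intensity (g CO2 per kWh)

hours_streamed = 2            # e.g. one evening's viewing
emissions_g = hours_streamed * energy_per_hour_kwh * grid_carbon_g_per_kwh
print(f"{emissions_g:.0f} g CO2")  # 32 g CO2
```

Small per-use figures like this matter because they are multiplied across billions of users every day.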

Positive environmental impacts

  • Reduced travel — video conferencing and remote working reduce the need for commuting and business travel, lowering carbon emissions
  • Paperless offices — digital documents, email, and cloud storage reduce the need for paper, saving trees and reducing waste
  • Smart systems — smart thermostats, lighting, and energy management systems can reduce energy consumption in buildings
  • Environmental monitoring — sensors and satellites collect data on climate change, deforestation, and pollution, helping scientists and governments take action
  • Efficiency improvements — logistics software optimises delivery routes, reducing fuel consumption
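The “smart systems” idea can be illustrated with a minimal thermostat control loop — a sketch only, with made-up temperatures and thresholds:

```python
def thermostat_step(current_temp: float, target: float, heating_on: bool,
                    hysteresis: float = 0.5) -> bool:
    """Decide whether the heating should be on.

    A hysteresis band around the target stops the heating from
    switching on and off rapidly, which wastes energy.
    """
    if current_temp < target - hysteresis:
        return True    # too cold: switch heating on
    if current_temp > target + hysteresis:
        return False   # warm enough: switch heating off
    return heating_on  # inside the band: keep the current state

# Target 20°C: heating turns on below 19.5°C and off above 20.5°C
print(thermostat_step(19.0, 20.0, heating_on=False))  # True
print(thermostat_step(20.2, 20.0, heating_on=True))   # True (stays on)
print(thermostat_step(20.6, 20.0, heating_on=True))   # False
```

Real smart thermostats add scheduling and occupancy sensing on top of a control loop like this, but the energy saving comes from the same principle: only heat when needed.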

What can be done

  • Recycling e-waste — proper recycling recovers valuable materials and prevents toxic substances entering the environment
  • Using renewable energy — data centres and manufacturers can switch to solar, wind, or hydroelectric power
  • Extending device lifespan — repairing and refurbishing devices instead of replacing them
  • Energy-efficient design — designing hardware and software to use less power
  • Responsible disposal — following regulations for disposing of electronic equipment safely

Exam questions on environmental impacts expect balanced answers. Discuss both positive and negative effects, and suggest actions that individuals or organisations can take to reduce the negative impact of technology on the environment.