Computer organization and architecture is the subset of computer study that acts as a backbone for modern computing. It covers the physical elements of a computer system as well as the theoretical principles guiding its design and operation. Understanding this field will help you design better processors and write better software, making you more successful in your projects.
Computer Architecture and Organization are fundamental concepts in designing computer systems, each focusing on different aspects of system functionality and implementation.
Computer Architecture describes the user-visible attributes of a computer system, such as addressing methods, the instruction set, and data representation in bits. These attributes directly affect how programs run, since they define the abstract system. In short, computer architecture tells us what the system does and frames the functionality and capabilities a user can expect from it.
Computer Organization, by contrast, is concerned with the internal organization and functional components of a system. It involves the physical design of components as well as the interconnections that realize the architectural requirements. This aspect deals with the realization of the conceptual model formed by the architecture and defines how the system is implemented. It includes data path design, control units, and memory hierarchies, among other features that determine the efficiency and performance of the system.
A computer system has five main components:
1. Motherboard
The motherboard integrates all the components together. It determines the overall design, size, and compatibility with hardware components such as the CPU, RAM, and GPU. A faulty motherboard will make a computer unusable.
2. Central Processing Unit (CPU)
Often called the computer's brain, the CPU executes instructions and processes data. Modern CPUs are usually multi-core, which enables them to handle multiple tasks simultaneously. An unreliable CPU can significantly affect performance.
3. Graphics Processing Unit (GPU)
The GPU handles the display of images, animation, and video. For gaming and graphics-intensive work, a high-end GPU assists the CPU in providing a smooth experience. A malfunctioning GPU can cause display problems, including black screens.
4. Random Access Memory (RAM)
RAM temporarily holds data that the CPU accesses to execute current tasks. Additional RAM improves performance, particularly when multitasking. Faulty RAM can cause slow-downs and crashes but will not totally shut down the computer.
5. Storage Device
This is where your data, operating system, and programs are stored. Typical storage devices are Hard Disk Drives (HDDs) and Solid-State Drives (SSDs). A faulty storage device can cause data loss and slow booting and loading of the system and applications.
Digital logic and circuits are the building blocks of every computer system. They define how computers represent, process, and store information using electronic signals.
What is Digital Logic?
Digital logic is the set of rules and procedures for manipulating binary values (0s and 1s) in electronic circuits. All data and instructions that computers use take this binary form.
Logic Gates
Logic gates are the basic building blocks of digital circuits. They each perform a basic logical operation (AND, OR, NOT, NAND, NOR, XOR, and XNOR) on one or more binary inputs and produce one single binary output. Combinations of gates make the devices that allow computers to perform complex decisions and calculations.
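The seven gates named above can be sketched in a few lines of Python, using bitwise operators on single-bit inputs (the function names here are just for illustration):

```python
# Minimal sketch of the basic logic gates on single-bit inputs (0 or 1).
def AND(a, b):  return a & b
def OR(a, b):   return a | b
def NOT(a):     return 1 - a
def NAND(a, b): return NOT(AND(a, b))   # NAND is NOT applied to AND
def NOR(a, b):  return NOT(OR(a, b))
def XOR(a, b):  return a ^ b            # 1 only when inputs differ
def XNOR(a, b): return NOT(XOR(a, b))   # 1 only when inputs match

# Truth table for XOR.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, XOR(a, b))
```

Combining these functions the same way hardware combines gates is exactly how the more complex circuits below are built.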
Boolean Algebra
Boolean algebra is a mathematical space for describing the operations and relationships of binary variables. It is the theoretical basis for developing and simplifying digital circuits and ensures that the circuit performs the intended logical functions as efficiently as possible.
Combinational Circuits
Combinational circuits are digital circuits whose outputs depend only on the current inputs. Examples include adders, multiplexers, encoders, and decoders. Combinational circuits perform arithmetic with adders, route data with multiplexers, and change data formats with encoders and decoders.
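A classic combinational example is the full adder, built purely from gate operations; chaining full adders gives a ripple-carry adder. This is a sketch, not a hardware description:

```python
# A full adder: sum and carry-out from two input bits and a carry-in.
def full_adder(a, b, cin):
    s1 = a ^ b
    total = s1 ^ cin               # sum bit
    carry = (a & b) | (s1 & cin)   # carry-out
    return total, carry

# Ripple-carry adder: chain four full adders to add two 4-bit numbers.
def add4(x, y):
    carry, result = 0, 0
    for i in range(4):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result, carry

print(add4(0b0110, 0b0011))  # 6 + 3 = (9, 0): sum 9, no carry-out
```

Note that the output at every step is a pure function of the inputs; nothing is remembered between calls, which is the defining property of combinational logic.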
Sequential Circuits
Sequential circuits are digital circuits whose outputs depend on both current inputs and previous states (history). Sequential circuits contain memory elements (flip-flops) that store information, which makes it possible to build counters, registers, and storage units. Sequential circuits are essential for all control logic and data storage.
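The contrast with combinational logic can be sketched with a 3-bit counter: its next output depends on the state its flip-flops stored on the previous clock tick (the class below is an illustrative model, not real hardware):

```python
# Sketch of a sequential circuit: a 3-bit counter whose next value
# depends on its stored state, updated once per clock "tick".
class Counter3:
    def __init__(self):
        self.state = 0                       # the three flip-flops' stored bits

    def tick(self):
        self.state = (self.state + 1) % 8    # wraps around after 0b111
        return self.state

c = Counter3()
print([c.tick() for _ in range(10)])  # [1, 2, 3, 4, 5, 6, 7, 0, 1, 2]
```

The same `tick` input produces different outputs over time, precisely because the circuit has memory.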
Digital Systems
A digital system is a connected system of digital circuits that work together to perform functions. Computers, calculators, and digital watches are all examples of digital systems built upon these principles.
Understanding digital logic and circuits is essential for grasping how computers operate at the most fundamental level, providing the groundwork for more advanced topics in computer organization and architecture.
Modern processors execute programs by following a well-defined sequence of steps known as the instruction cycle. Understanding this cycle is fundamental to grasping how computers process information and manage the flow of operations.
What is the Instruction Cycle?
The instruction cycle, sometimes called the fetch-decode-execute cycle, is the process by which a processor retrieves, interprets, and carries out instructions from memory. This cycle ensures that each instruction in a program is executed in the correct order and with precise timing.
The Main Steps of the Instruction Cycle:
- Fetch: The processor fetches the next instruction from memory, using the program counter to determine its address.
- Decode: The fetched instruction is decoded to determine what operation to perform and what data or resources it affects.
- Execute: The processor carries out the required operation, whether arithmetic, logic, a data move, or a change in control flow.
Every instruction in a program passes through this cycle; it forms the basis of all program execution.
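The steps above can be sketched as a toy fetch-decode-execute loop. The three-instruction machine here (LOAD, ADD, HALT) is invented purely for illustration:

```python
# Toy fetch-decode-execute loop for a hypothetical accumulator machine.
memory = [
    ("LOAD", 5),   # acc = 5
    ("ADD", 7),    # acc += 7
    ("HALT", 0),   # stop execution
]

pc, acc, running = 0, 0, True
while running:
    op, arg = memory[pc]   # fetch: read the instruction at the program counter
    pc += 1                # advance the PC to the next instruction
    if op == "LOAD":       # decode + execute
        acc = arg
    elif op == "ADD":
        acc += arg
    elif op == "HALT":
        running = False

print(acc)  # 12
```

Real processors do the same three steps in hardware, with the control unit sequencing them every clock cycle.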
Control Flow Mechanisms
The control unit inside the CPU manages the instruction cycle, ensuring correct timing and sequencing. It issues control signals that govern how data moves, which components operate together, and the order in which each operation occurs.
Timing and Control
Accurate timing is necessary to coordinate activities across a processor. The control unit may use:
- Hardwired control: Fixed logic circuits that generate control signals directly.
- Microprogrammed control: A sequence of microinstructions stored in control memory that generates control signals.
RISC vs CISC Architectures
The instruction cycle and control flow can differ between processor architectures:
- RISC (Reduced Instruction Set Computer): A RISC architecture emphasizes simple, uniform instructions that generally execute in a single cycle. This in turn allows for faster and more predictable control flow.
- CISC (Complex Instruction Set Computer): Supports more complex instructions that may require multiple cycles to complete, making control flow more intricate.
Why Instruction Cycle and Control Flow Matter?
Understanding the instruction cycle and control flow is critical to maximizing processor performance, optimizing hardware design, and writing efficient software. These topics are also the basis for more advanced ones such as pipelining and parallel execution.
Instruction Set Architecture (ISA) defines the boundary between hardware and software in a computer system. It specifies what instructions a processor can perform, how those instructions are encoded, and how memory and input/output devices are accessed. Understanding ISA is fundamental to understanding how programs interface with hardware and how different computer systems achieve compatibility and optimal performance.
Instruction Format
An instruction has a specific format that encodes the operation to perform, the data or addresses it refers to, and any other control information. The structure of these formats collectively determines how efficiently a processor can execute programs.
Addressing Modes
Addressing modes specify how an instruction references its data, whether in registers or memory. Common examples include immediate, direct, indirect, register, and indexed addressing. Addressing modes provide flexibility and efficiency when executing programs.
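The common addressing modes can be contrasted on a toy machine model; the `memory` list and `R1` register below are invented for illustration and do not belong to any real ISA:

```python
# Toy machine state for demonstrating addressing modes.
memory = [0, 42, 3, 99]   # memory[1] == 42, memory[2] == 3, memory[3] == 99
R1 = 2                    # a register holding a value (or an offset)

operand_immediate = 7                  # immediate: the value is in the instruction itself
operand_direct    = memory[1]          # direct: instruction holds the address -> 42
operand_register  = R1                 # register: the operand lives in a register -> 2
operand_indirect  = memory[memory[2]]  # indirect: memory holds the address -> memory[3] == 99
operand_indexed   = memory[1 + R1]     # indexed: base address + register offset -> memory[3] == 99

print(operand_immediate, operand_direct, operand_register,
      operand_indirect, operand_indexed)
```

Each mode trades instruction length against flexibility: immediate operands need no memory access at all, while indirect addressing needs two.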
Assembly-Level Design
Assembly language offers a text-based representation of machine instructions as specified by the ISA. Assembly-level design is concerned with writing programs that make direct use of the processor's instruction set, allowing for tight control over hardware behaviour.
Microarchitecture and Instruction Set Architecture
While the ISA specifies what instructions a processor implements, microarchitecture describes how those instructions are realized in hardware. Two processors may share the same ISA yet differ in internal layout, performance, and efficiency.
Input/Output Synchronization
Processors must coordinate with input/output (I/O) devices, which may operate at different speeds. Synchronization can be:
- Synchronous: Data transfer is coordinated with a shared clock signal.
- Asynchronous: Data transfer occurs independently, often requiring handshaking protocols.
Bus Systems and Bus Arbitration
Bus systems are shared communication pathways connecting CPUs, memory, and I/O devices. Bus arbitration is the process of managing access to the bus, ensuring that only one device communicates at a time to prevent data collisions.
Direct Memory Access (DMA) and Controllers
DMA allows peripherals to move data directly to or from memory without continuous CPU intervention, which increases efficiency when large amounts of data are transferred. These operations are managed by DMA controllers such as the 8257 and 8237, which support multiple transfer modes.
Interrupts
Interrupts are signals that temporarily suspend the processor's current task so it can address urgent events, such as I/O activity. This mechanism supports responsive and efficient system behavior.
Programmable Peripheral Interface (PPI) 8255
The PPI 8255 is a widely used device that facilitates communication between the processor and peripheral devices, allowing flexible and programmable I/O operations.
A strong understanding of ISA is essential for system designers, compiler writers, and anyone interested in how software instructions are translated into hardware actions. It forms the backbone of compatibility, performance, and programmability in modern computer systems.
Inside the CPU, executing instructions involves a series of precise data movements and basic operations. These are managed through register transfers and micro-operations, which together form the foundation of all processing activities.
Register Transfer
Registers are small, fast storage units within the CPU that temporarily hold data and instructions. Register transfer refers to the process of moving data between these registers, often using dedicated buses or internal pathways. The rules and notation for specifying these transfers are known as Register Transfer Language (RTL), which provides a clear way to describe how data flows within the processor.
Micro-Operations
Micro-operations are the simplest operations performed on the data stored in registers. Each instruction in a program is broken down into a sequence of micro-operations, such as transferring data, performing arithmetic, or shifting bits.
Types of Micro-Operations
- Arithmetic Micro-Operations: Perform basic arithmetic calculations like addition, subtraction, increment, and decrement directly on register contents.
- Shift Micro-Operations: Move bits within a register to the left or right, supporting operations like multiplication, division, and data alignment.
- Logic Micro-Operations: Carry out logic functions such as AND, OR, XOR, and NOT on register data.
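The three kinds of micro-operations listed above can be sketched on 8-bit register contents, with results masked so values stay within the register width (function names are illustrative):

```python
# Sketch of micro-operations on 8-bit register contents.
MASK = 0xFF  # keep results within 8 bits, as a real register would

def inc(r):         return (r + 1) & MASK    # arithmetic: increment
def add(r1, r2):    return (r1 + r2) & MASK  # arithmetic: add
def shl(r):         return (r << 1) & MASK   # shift: logical shift left
def shr(r):         return r >> 1            # shift: logical shift right
def and_op(r1, r2): return r1 & r2           # logic: AND
def xor_op(r1, r2): return r1 ^ r2           # logic: XOR

print(shl(0b0000_0101))  # 10: shifting left by one doubles the value
print(inc(0xFF))         # 0: increment wraps around at the register width
```

Every machine instruction a CPU runs decomposes into short sequences of exactly these kinds of operations on registers.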
Data Transfers (Bus/Memory)
Data transfer between registers and memory or other components is coordinated via buses—shared pathways that enable communication. Efficient data transfer mechanisms are essential for high-speed processing and overall system performance.
Control Units: Hardwired vs. Microprogrammed
The control unit directs the sequence of micro-operations. There are two main types:
- Hardwired Control Unit: Uses fixed logic circuits to generate control signals, resulting in fast but less flexible operation.
- Microprogrammed Control Unit: Uses a set of microinstructions stored in memory, allowing for easier updates and more complex control sequences.
Arithmetic operations form the core of computer processing, making possible everything from simple computations to intricate data analysis. Computer arithmetic is concerned with the processes and algorithms the processor's Arithmetic Logic Unit (ALU) uses to carry out these essential operations accurately and efficiently.
ALU Operations
The Arithmetic Logic Unit (ALU) performs all the arithmetic and logical operations in the CPU, including addition, subtraction, multiplication, division, and bitwise logic.
Number Complements
To simplify arithmetic, especially subtraction and the handling of negative numbers, computers employ number complements:
- One's Complement: Flips all bits of a binary number.
- Two's Complement: Inverts all the bits and adds one, giving a convenient method of negation and subtracting by adding.
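Both complements are simple bit operations, shown here for 8-bit values, along with the subtraction-by-addition trick two's complement enables:

```python
# One's and two's complement of an n-bit value (8 bits by default).
def ones_complement(x, bits=8):
    return x ^ ((1 << bits) - 1)                           # flip every bit

def twos_complement(x, bits=8):
    return (ones_complement(x, bits) + 1) & ((1 << bits) - 1)  # flip, then add one

# 8-bit representation of -5: invert 0000_0101, then add one.
print(bin(twos_complement(5)))           # 0b11111011

# Subtraction via addition: 9 - 5 == 9 + twos_complement(5), modulo 2^8.
print((9 + twos_complement(5)) & 0xFF)   # 4
```

This is why the ALU needs only an adder to subtract: negating an operand and adding gives the difference.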
Negative Number Representation
Computers represent negative numbers using complement systems (mainly two's complement), allowing the ALU to process both positive and negative values seamlessly.
Division Algorithms
Division is more complicated than addition or subtraction. It is carried out efficiently at the hardware level using algorithms such as restoring and non-restoring division.
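The restoring variant can be sketched on unsigned integers: shift the remainder-quotient pair left, try subtracting the divisor, and restore the remainder whenever the trial goes negative. This is a minimal model of the hardware algorithm, not a production routine:

```python
# Sketch of restoring division for unsigned n-bit integers.
def restoring_divide(dividend, divisor, bits=8):
    remainder, quotient = 0, dividend
    for _ in range(bits):
        # Shift the remainder:quotient pair left by one bit.
        remainder = (remainder << 1) | ((quotient >> (bits - 1)) & 1)
        quotient = (quotient << 1) & ((1 << bits) - 1)
        remainder -= divisor           # trial subtraction
        if remainder < 0:
            remainder += divisor       # restore: the trial failed
        else:
            quotient |= 1              # record a 1 bit in the quotient
    return quotient, remainder

print(restoring_divide(13, 4))  # (3, 1): 13 = 4 * 3 + 1
```

Non-restoring division avoids the restore step by alternating additions and subtractions, saving one operation per failed trial.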
Booth’s Method
Booth’s algorithm is an efficient technique for multiplying binary numbers, especially useful when dealing with signed numbers. It reduces the number of required operations, improving multiplication speed and efficiency.
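Booth's algorithm can be sketched with the usual A, Q, Q-1 registers: each step inspects the current and previous multiplier bits, conditionally adds or subtracts the multiplicand, and arithmetically shifts the combined register right. This sketch leans on Python's signed integers for the arithmetic shift:

```python
# Sketch of Booth's algorithm for signed n-bit multiplication.
def booth_multiply(m, q, bits=8):
    A = 0                          # accumulator (kept as a signed Python int)
    Q = q & ((1 << bits) - 1)      # multiplier bits
    Q_1 = 0                        # the extra bit to the right of Q
    for _ in range(bits):
        pair = ((Q & 1) << 1) | Q_1
        if pair == 0b10:           # bit pattern 1,0 -> subtract multiplicand
            A -= m
        elif pair == 0b01:         # bit pattern 0,1 -> add multiplicand
            A += m
        # Arithmetic right shift of the combined A:Q:Q_1 register.
        Q_1 = Q & 1
        Q = (Q >> 1) | ((A & 1) << (bits - 1))
        A >>= 1                    # Python's >> sign-extends negative ints
    return (A << bits) | Q         # combine A and Q into the signed product

print(booth_multiply(-3, 5))  # -15
print(booth_multiply(3, 4))   # 12
```

The savings come from runs of identical multiplier bits: a block of consecutive 1s costs one subtraction and one addition instead of one addition per bit.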
Overflow Handling
Overflow occurs when the result of an arithmetic operation falls outside the range representable in the number of bits allocated to it. The ALU provides mechanisms to detect and handle overflow, ensuring error-free computation.
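A standard detection rule for two's-complement addition, sketched here for 8 bits: overflow occurs exactly when both operands have the same sign bit but the result's sign bit differs.

```python
# Detecting signed overflow in 8-bit two's-complement addition.
def signed_add(a, b, bits=8):
    result = (a + b) & ((1 << bits) - 1)   # wrap to the register width
    sign = 1 << (bits - 1)                 # mask for the sign bit
    overflow = ((a & sign) == (b & sign)) and ((result & sign) != (a & sign))
    return result, overflow

print(signed_add(100, 50))  # (150, True): 100 + 50 exceeds the +127 maximum
print(signed_add(100, 20))  # (120, False): result fits in 8 signed bits
```

Hardware computes the same condition by XOR-ing the carries into and out of the sign bit, which is equivalent to this sign comparison.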