Understanding the Basics of Computers
The basic architecture of a computer
A computer is any system that automatically processes data. Several different architectures have been used to build such systems, especially in the early days of computing. However, since the early 1950s the vast majority of computers have been built according to a design known as the von Neumann architecture. This design is now so ubiquitous that, in practice, it is often treated as synonymous with ‘computer architecture’.
This system is composed of several key elements: the central processing unit (CPU), memory, input/output (I/O) devices, and storage. Each of these components plays a crucial role in the overall functionality of the computer.
The CPU, often referred to as the brain of the computer, is responsible for executing instructions. It performs arithmetic and logical operations, controls the timing of operations, and manages data flow within the system. It performs a specific set of actions on the data it receives, depending on the program that is loaded in the memory.
The CPU itself is made up of several subcomponents. These can vary, but they generally include the arithmetic logic unit (ALU), and the control unit. The ALU handles mathematical calculations and logical operations, while the control unit directs the operation of the processor by fetching instructions from memory, decoding them, and then executing them.
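To make that fetch-decode-execute cycle concrete, here is a minimal sketch of a toy CPU in Python. The tiny instruction set (LOAD, ADD, HALT) and the way the program is stored are invented purely for illustration; real instruction sets are far richer than this.

```python
# A toy CPU illustrating the fetch-decode-execute cycle.
# The instruction set (LOAD, ADD, HALT) is invented for illustration.

def run(program):
    accumulator = 0          # a single register holding intermediate results
    pc = 0                   # program counter: address of the next instruction
    while True:
        opcode, operand = program[pc]   # fetch the instruction from memory
        pc += 1
        if opcode == "LOAD":            # decode, then execute
            accumulator = operand
        elif opcode == "ADD":
            accumulator += operand
        elif opcode == "HALT":
            return accumulator

# A tiny program: load 2, add 3, stop.
result = run([("LOAD", 2), ("ADD", 3), ("HALT", None)])
print(result)  # 5
```

Notice how the control-unit role (fetching and decoding) and the ALU role (the addition) are both visible even in this tiny loop.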
Another key component of the computer's basic architecture is memory. A specific kind of memory called RAM, which stands for Random Access Memory, is essential here. You can think of it as the computer's short-term memory: it holds whatever data and instructions the CPU currently needs for the operation at hand. Once that operation is completed, the data can be overwritten, and RAM loses its contents entirely when the computer powers off. Other parts of the computer serve as long-term memory – we'll cover those later.
Input devices are the means by which information is fed into the CPU for processing. A modern computer has many input devices, such as keyboards, mice and touchscreens.
Output devices are where processed data is sent. This could be the user’s screen or speakers, or more specialised features like warning lights and haptic feedback. In short, they are the mechanism for delivering the result of the processing to the outside world.
So, those are the basic components at the heart of a computer. They are housed on the motherboard, the main circuit board, which provides the electrical connections that allow the CPU, memory and other essential components to communicate with each other.
The motherboard also includes the system bus, a communication pathway that transfers data between the CPU, memory, and other peripherals. The speed and efficiency of the system bus can significantly impact the overall performance of the computer.
The power supply is another vital aspect of computer architecture. It converts the alternating current (AC) from an outlet into direct current (DC), which the computer's components require. That is because logic gates, a key component in computer processing, can only run on DC. It also steps the voltage down to the lower levels the components are designed for. Without a reliable power supply, the computer would not be able to function.
The components we’ve discussed so far – input devices, CPU, memory, output devices, and power supply – form the minimum set of components needed for a standard modern computer. But of course, most computers contain many more than this.
These include cooling devices such as fans and heat sinks, which prevent more powerful systems from overheating. Then there are graphics cards, sound cards, ports, batteries, screens, keyboards, and many more components that go into making the computer usable and effective. But these are all supplementary to that core architecture – a CPU, memory, input and output.
Transistors
When you look at the basic architecture of the computer, as we did in the last orb, you can see that data is fed into the CPU, where it is processed, and an output is produced.
But how does this happen? How does a machine execute commands on a piece of data? The answer is: it’s complicated. But we’re going to break it down slowly so that we can really understand the components that go into processing data.
The first building block that we need to get our heads round is the idea of a transistor. A transistor is a tiny electronic device that acts as a switch or an amplifier for electrical signals. It's made from semiconductor materials, typically silicon.
Remember how, in a Turing machine, data would be represented as 1s or 0s? Well, transistors act as switches that can either be on or off. A 1 is represented by a transistor that is on, and a 0 is represented by a transistor that is off.
The key feature of a transistor is its ability to control the flow of electricity. It has three terminals: the base, collector, and emitter. By applying a small current to the base, you can control whether electricity flows from the collector to the emitter. This simple on/off functionality is what makes transistors so powerful in digital systems.
In the image below, you can see a diagram of a classic Bipolar Junction Transistor (BJT). The ‘B’ stands for ‘Base’, the ‘C’ for ‘Collector’ and the ‘E’ for ‘Emitter’.
As you can see from the direction of the arrow, the current flows from the Collector to the Emitter. But it needs to pass through the Base in order to get there. The Base functions like a gate that will either cut the power off or allow it to flow through. Whether the Base is open or closed depends on whether a current is being applied to it from its separate power source (indicated by the line sticking out to the left).
So, if we put a separate current through B, a current will be able to flow from C to E. If we turn the current off at B, no current will flow from C to E.
Each switch between the on and off states draws a small current, which leads to electrical losses. These losses are dissipated as heat – that's why processors and phones get hot when they are processing something.
So, a transistor is a gate that allows us to control whether a current can pass through, and is controlled by a separate current. By switching the power on or off on ‘B’, we can control whether a separate current is flowing from ‘C’ to ‘E’.
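This behaviour can be modelled in a few lines of Python. This is only an idealised sketch: the transistor is treated as a perfect switch, ignoring voltages, gain, and the losses mentioned above.

```python
# An idealised transistor: current flows from collector to emitter
# only while a current is applied to the base.

def transistor(base_current, collector_current):
    """Return the current arriving at the emitter."""
    if base_current:                 # base is 'on': the switch conducts
        return collector_current
    return 0                         # base is 'off': no current flows

print(transistor(base_current=1, collector_current=1))  # 1 (current flows C to E)
print(transistor(base_current=0, collector_current=1))  # 0 (the gate is closed)
```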
And that, in a nutshell, is what is happening inside a CPU. Except instead of there being just one transistor, there are millions or even billions of microscopically small ones. One transistor will receive either a current or no current, and the resulting output will dictate whether another transistor transmits current, which will control another transistor, and so on. Across a vast array of transistors, this system can be used to represent and process complex data.
A really crucial part of how this process works is the arrangement of the transistors. Transistors are organized into smart formations called digital circuits. The most common type of digital circuit is called a logic gate, which we will discuss in the next orb.
Logic gates
Computers work by applying current through transistors according to a set of rules. Using a huge number of transistors which turn on (1) or off (0), we can transmit and process any data we like, using binary notation.
But how does this actually make changes to that data? How do computers perform arithmetic, or logical checks like whether a condition is true or not? Let’s dig a bit deeper into the processing part of the process.
As we’ve discussed, the on or off states of transistors are used to represent 1s and 0s. Processing of that data is achieved by arranging many transistors into a particular shape. These shapes are called digital circuits, and the most important form of digital circuit is a logic gate.
Logic gates are smart ways of arranging transistors so that they perform logical checks on their inputs. For example, there is an AND gate, which is designed to check whether two separate inputs are both 1s. We input either a 1 or a 0 into two different inputs, and the gate will output a 1 if they are both 1s, or a 0 if they are anything else.
This functionality can be achieved quite simply by arranging transistors in the pattern below.
In order to check whether the inputs ‘A’ and ‘B’ are both 1s, we simply send a current from point ‘V’ to the output points at the bottom. Remember that the transistor will only allow a current through it if another current is being applied to it (from points A and B, respectively). So, in order for the current to be received at the output point, there must be a current flowing through both A AND B, meaning they are both 1s.
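Using the idealised-switch view of a transistor, the series arrangement just described can be sketched in Python. Again, this is a simplification that ignores real electrical behaviour: the point is just that current from ‘V’ only reaches the output if both bases are driven.

```python
def transistor(base, current_in):
    # Idealised switch: passes current only while the base is driven.
    return current_in if base else 0

def and_gate(a, b):
    # Two transistors in series: current from point V must pass through both.
    v = 1                            # supply current from point V
    after_first = transistor(a, v)   # blocked here unless A is 1
    return transistor(b, after_first)  # blocked here unless B is also 1

print(and_gate(1, 1))  # 1: both bases driven, current reaches the output
print(and_gate(1, 0))  # 0: the second transistor blocks the current
```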
Rather than drawing the circuit above every time we want to indicate an AND gate, electrical engineers use the following symbol, which means exactly the same thing:
In addition to AND gates, there are several other logic gates that are essential to the basic logical operations performed inside a computer. The main ones are OR, NOR, NOT, NAND, XOR and XNOR. They are represented using the symbols below:
As well as the symbols used to represent the different gates, this table shows the functionality of each of the logic gates in the ‘Rule’ column. The numbers on the left are the input signals, and the number on the right is the output. So you’ll see for the AND gate, if the two inputs are 0 (marked as ‘00’), then the output will be 0. If input A is 1 and input B is 0 (marked as ‘10’), then the output will also be 0, and so on.
Based on that, you should be able to figure out what all of these different gates do. The OR gate outputs a 1 if either the A or B inputs are 1s. The NAND gate is short for NOT AND. As you can see, it does the opposite of the AND gate, outputting a ‘1’ for any inputs except for ‘11’.
The NOR gate does the opposite of the OR gate – only outputting a ‘1’ if there is a ‘00’ input. The XOR gate can be thought of as a strict OR gate – it only outputs a 1 if one of the two inputs is a ‘1’, not if both the inputs are a ‘1’. Lastly, the XNOR gate does the inverse of the XOR gate, returning a ‘1’ only if the inputs are ‘11’ or ‘00’.
The NOT gate is different from all of these, because it takes only one input. It simply outputs the opposite of whatever is put in: an input of ‘1’ gives an output of ‘0’, and vice versa.
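The rules in the table can be written directly as small Python functions. Printing their outputs for every input pair reproduces the truth table:

```python
# Each gate maps input bits to an output bit, matching the 'Rule' column.
def AND(a, b):  return a & b
def OR(a, b):   return a | b
def NAND(a, b): return 1 - (a & b)   # NOT AND: inverts the AND output
def NOR(a, b):  return 1 - (a | b)   # NOT OR: 1 only for input 00
def XOR(a, b):  return a ^ b         # strict OR: 1 only if inputs differ
def XNOR(a, b): return 1 - (a ^ b)   # 1 only for inputs 00 or 11
def NOT(a):     return 1 - a         # single input, inverted

# Print the truth table for a few of the gates.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", "AND:", AND(a, b), "OR:", OR(a, b), "XOR:", XOR(a, b))
```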
By combining many of these logic gates together, a CPU is able to perform complex logical checks and operations to process data.
If you remember the example we used of the Turing machine at the start of this pathway, the human performed simple checks on 1s and 0s to execute a more complex operation on some data. Logic gates, these smart combinations of transistors, are how computers perform such simple checks. By combining these simple checks across vast numbers of logic gates and transistors, they can perform computations at enormous scale and speed.
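As one concrete example of gates working in combination, a ‘half adder’ (a standard circuit, though not one covered above) adds two bits using nothing but an XOR gate for the sum and an AND gate for the carry:

```python
def half_adder(a, b):
    # XOR gives the sum bit; AND gives the carry bit.
    sum_bit = a ^ b
    carry = a & b
    return sum_bit, carry

print(half_adder(1, 1))  # (0, 1): 1 + 1 = binary 10
print(half_adder(1, 0))  # (1, 0): 1 + 0 = binary 01
```

Chain enough of these adders together and you have the circuitry for adding whole numbers, which is exactly the kind of arithmetic the ALU performs.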
Logic gates are not an easy concept to get your head around, especially when we think about them working in combination to perform all the complex processing that computers are capable of. We could go into more detail about this, but unless you are doing advanced computer science, it’s probably enough to understand them at the level that we’ve described.
In addition to logic gates, there are other kinds of digital circuits. Memory cells are simple combinations of transistors that store a single bit (a single 1 or 0), holding it until their state is changed. Memory cells are organized into larger structures, such as registers and memory arrays, which provide the storage capacity needed for computer programs and data.
Digital circuits also include various control and timing elements, such as clocks and flip-flops. Clocks generate regular pulses that synchronize the operation of different parts of the circuit, ensuring that data is processed in a coordinated manner. Flip-flops, on the other hand, are used to store and transfer data between different stages of a circuit, acting as temporary holding cells that help maintain the flow of information.
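To show how a circuit can ‘remember’ a bit, here is a sketch of an SR (set-reset) latch, a classic textbook circuit built from two cross-coupled NOR gates. The section above doesn't describe its internals, so treat this as an illustrative aside; the loop simply re-evaluates the two gates until the feedback between them settles.

```python
def nor(a, b):
    return 1 - (a | b)

def sr_latch(set_bit, reset_bit, q=0):
    # Two cross-coupled NOR gates: each gate's output feeds the other's input.
    # Re-evaluate a few times so the feedback loop settles into a stable state.
    q_bar = 1 - q
    for _ in range(4):
        q = nor(reset_bit, q_bar)
        q_bar = nor(set_bit, q)
    return q

q = sr_latch(set_bit=1, reset_bit=0)       # 'set' stores a 1
print(q)                                   # 1
q = sr_latch(set_bit=0, reset_bit=0, q=q)  # inputs removed: the 1 is remembered
print(q)                                   # 1
q = sr_latch(set_bit=0, reset_bit=1, q=q)  # 'reset' stores a 0
print(q)                                   # 0
```

The crucial behaviour is the middle call: with both inputs off, the latch keeps outputting the bit it was last given. That retention is the essence of a memory cell.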
Like logic gates, these are essential pieces in the puzzle of how computers perform operations on data.