A computer is an electronic device that receives raw data, processes it, and stores it as information (O'Brien & Marakas, 2011). Because computers can perform such a wide range of activities, they have been incorporated into almost all devices, from simple ones such as remote controls and mobile phones to very complex ones such as robots and satellite monitoring systems. At its core, a computer consists of the Central Processing Unit (CPU), which contains the Arithmetic and Logic Unit (ALU) and the control unit, together with memory. The ALU performs arithmetic and logic operations, the control unit directs all the activities of the computer system, and memory stores data and information (O'Brien & Marakas, 2011). Peripheral devices include keyboards, mice, speakers, monitors, and touchscreens; they allow data and information to move between the computer and the outside world. Since digital computers were invented during World War II, they have developed rapidly in processing speed, power, storage, and versatility.
Since the 1950s, computers have advanced rapidly in both hardware and software. Processing power has increased while the physical size of the computer has shrunk, largely due to the use of integrated circuits. Storage capacity has grown from a few megabytes in the 1950s to terabytes today. Programmers have developed software that can perform very complicated tasks in fractions of a second. Over the last 25 years in particular, computers have become faster, smaller, and well suited to a wide range of activities. Much of this advancement can be attributed to concepts such as RISC microprocessors, pipelining, cache memory, and virtual memory. This essay examines the evolution of computer technology over the last twenty-five years, with a special focus on these concepts.
Reduced Instruction Set Computer (RISC)
A reduced instruction set computer (RISC) is a microprocessor whose instruction set architecture provides a small number of simple instruction types so that the processor can operate at high speed (Hennessy & Patterson, 2006). Each instruction type a processor supports requires additional circuitry; thus a processor that implements a large number of instruction types has a more complicated design that is cumbersome and slower in operation.
The evolution of the RISC concept has greatly influenced the design of microprocessors. One of the major design considerations is matching instructions to the clock: ideally, each simple instruction completes in roughly one clock cycle. Other vital considerations are keeping the microprocessor architecture simple and shifting complexity out of the hardware and into the compiler.
Apart from improved performance, incorporating RISC principles in microprocessor design has several other merits. A RISC-designed microprocessor is easier and faster to test because of the simplicity of each operation it performs. Programmers find it easier to develop software and compilers for RISC microprocessors because of the smaller instruction set. The design's simplicity also gives the designer more freedom in deciding how to use the chip's area. RISC microprocessors are targeted by high-level language compilers, which map programs onto the small instruction set (Hennessy & Patterson, 2006). Currently, most RISC microprocessors use a modified Harvard architecture, in which instruction and data transfers are separated: the processor contains two different caches, one for instructions and one for data, so instruction fetches and data accesses do not interfere with each other.
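To make the load/store idea at the heart of RISC concrete, the following is a minimal sketch in Python of a hypothetical four-instruction machine; the instruction names and register count are illustrative, not any real ISA. Only LOAD and STORE touch memory, while ADD works purely on registers.

```python
# Minimal sketch of a RISC-style load/store machine (hypothetical ISA).
# Only LOAD/STORE access memory; ADD operates on registers only.

def run(program, memory):
    regs = [0] * 8  # eight general-purpose registers

    for op, *args in program:
        if op == "LOAD":        # LOAD rd, addr   -> rd = memory[addr]
            rd, addr = args
            regs[rd] = memory[addr]
        elif op == "STORE":     # STORE rs, addr  -> memory[addr] = rs
            rs, addr = args
            memory[addr] = regs[rs]
        elif op == "ADD":       # ADD rd, ra, rb  -> rd = ra + rb (no memory access)
            rd, ra, rb = args
            regs[rd] = regs[ra] + regs[rb]
        elif op == "HALT":
            break
    return regs, memory

# Computing c = a + b: a CISC machine might offer one memory-to-memory
# instruction; a RISC machine uses explicit loads and stores instead.
program = [
    ("LOAD", 0, 0),     # r0 = memory[0]  (a)
    ("LOAD", 1, 1),     # r1 = memory[1]  (b)
    ("ADD",  2, 0, 1),  # r2 = r0 + r1
    ("STORE", 2, 2),    # memory[2] = r2  (c)
    ("HALT",),
]
regs, mem = run(program, [5, 7, 0])
print(mem[2])  # 12
```

Because every instruction has the same simple shape, each one is easy to decode and can complete in about one clock cycle, which is precisely the property that pipelining exploits.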
RISC-designed microcontrollers have become common due to their efficiency, low cost, and low power consumption. They are well suited to low-power embedded systems such as mobile phones, tablets, and handheld gaming devices, and RISC processors are also used in gaming consoles, servers, supercomputers, and workstations.
Pipelining
Pipelining can be defined as the process of queuing instructions in a pipeline so that they are stored and executed in an orderly manner (Baer, 2012). In pipelining, the execution of instructions is overlapped. To ensure there is no mix-up during execution, instruction processing is divided into stages, and these stages are connected like a pipe: an instruction enters at one end and exits at the other after processing. The main aim of pipelining is to increase the throughput of a computer.
In modern pipelines, each segment consists of a register followed by a combinational circuit. The register holds an instruction and its intermediate data while the combinational circuit operates on them. The output of the combinational circuit is then passed to the register of the next segment, where it is held until that segment processes it.
Technological advancement has produced two main types of pipelining: arithmetic pipelining and instruction pipelining (Baer, 2012). Arithmetic pipelining is used in most computers for floating-point arithmetic operations. Instruction pipelining increases the throughput of a system by overlapping the fetch, decode, and execute stages of successive instructions, which enables a computer to work on multiple instructions at the same time.
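A rough sketch of this overlap, assuming a simple three-stage (fetch, decode, execute) pipeline with one cycle per stage and no hazards, can be expressed as a cycle count in Python; the stage count and timing are illustrative, not a model of any particular processor.

```python
# Sketch: cycle count for n instructions on an idealized 3-stage pipeline
# (fetch, decode, execute), assuming one cycle per stage and no hazards.

STAGES = 3

def sequential_cycles(n):
    # Without pipelining, each instruction passes through all
    # stages by itself before the next one starts.
    return n * STAGES

def pipelined_cycles(n):
    # With pipelining, a new instruction enters every cycle once the
    # pipe is full: STAGES cycles to fill, then 1 cycle per instruction.
    return STAGES + (n - 1)

for n in (1, 10, 100):
    print(n, sequential_cycles(n), pipelined_cycles(n))
# For 100 instructions: 300 cycles sequentially vs. 102 pipelined,
# approaching a 3x throughput gain as n grows.
```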
Pipelining is popular due to its ability to reduce the processor's cycle time, increase throughput, and improve a system's reliability. However, it increases the latency of individual instructions and makes the processor more expensive to design and construct.
Cache Memory
Cache memory refers to a small, volatile, high-speed memory that gives the processor fast access to data (Chaplot, 2016). It also stores frequently accessed programs, data, and applications. Cache memory is typically the fastest accessible memory in a computer and is usually integrated into the motherboard or embedded in the processor. Because commonly used data is kept in the cache, the processor does not need to fetch it from the slower main memory each time it is requested. Cache memory can be classified as primary or secondary; the primary cache is the one closest to the processor. There is also disk cache, in which a section of memory is reserved to provide fast access to frequently accessed disk data.
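Because a cache is far smaller than main memory, it must evict old entries as new ones arrive. The sketch below shows one common replacement policy, least-recently-used (LRU), in Python; the capacity and the backing "main memory" function are stand-ins chosen for illustration, not part of any real hardware interface.

```python
from collections import OrderedDict

# Sketch: a tiny LRU cache in front of a slow "main memory" lookup.
class LRUCache:
    def __init__(self, capacity, fetch_from_memory):
        self.capacity = capacity
        self.fetch = fetch_from_memory  # called only on a cache miss
        self.lines = OrderedDict()      # address -> data, oldest first

    def read(self, address):
        if address in self.lines:
            self.lines.move_to_end(address)   # hit: mark most recently used
            return self.lines[address]
        data = self.fetch(address)            # miss: go to main memory
        self.lines[address] = data
        if len(self.lines) > self.capacity:
            self.lines.popitem(last=False)    # evict least recently used
        return data

# Usage: a 2-line cache over a pretend main memory.
cache = LRUCache(2, fetch_from_memory=lambda addr: addr * 10)
cache.read(1); cache.read(2)   # two misses fill the cache
cache.read(1)                  # hit: 1 becomes most recently used
cache.read(3)                  # miss: evicts 2, the least recently used
print(list(cache.lines))       # [1, 3]
```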
Over the past ten years, computer manufacturers have made cache memories larger (Chaplot, 2016). However, a larger cache consumes more power and generates more heat, which reduces the lifespan of the chip. To increase the efficiency of cache memory, chip producers are using software to manage it: the software analyzes usage patterns in order to move applications' data up and down the cache hierarchy.
Virtual Memory
Virtual memory refers to a memory management technique that maps the memory addresses used by a program onto physical addresses in main memory. It is a key part of modern computers, and its integration requires hardware support in the form of a memory management unit (MMU) in the CPU (Chaplot, 2016). Virtual memory enables secondary storage to be addressed as if it were part of main memory. The technique is implemented with both hardware and software: the operating system manages virtual memory addresses and assigns physical memory to virtual memory.
The main developments in virtual memory over the past 25 years have enabled programs to use more memory than is physically available, increased data security through memory isolation, and freed programs from having to manage a shared memory space (Chaplot, 2016). Virtual memory enables a computer to operate with less main memory, and its security and reliability have made it increasingly popular.
Currently, almost all virtual memory implementations use paging. In paging, the virtual address space is divided into pages, commonly 4 kilobytes in size. The translation of virtual addresses into physical addresses is performed through page tables. Each page table entry holds a flag showing whether the page is resident in main memory. If the entry indicates that the page is not resident, the hardware raises a page fault exception, which invokes the operating system's paging supervisor; the supervisor fetches the page from secondary storage into a free frame, updates the page table entry with the new mapping, and then restarts the translation.
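As a sketch of that translation path, the following Python fragment splits a virtual address into a page number and an offset (assuming 4 KB pages, as above) and consults a toy page table, raising and handling a page fault when the page is not resident. The table contents and the "load from disk" step are illustrative placeholders, not an operating system's actual mechanism.

```python
PAGE_SIZE = 4096  # 4 KB pages, as assumed in the text

# Toy page table: virtual page number -> physical frame number,
# or None if the page is not resident in main memory.
page_table = {0: 7, 1: None, 2: 3}

class PageFault(Exception):
    pass

def translate(virtual_address):
    page = virtual_address // PAGE_SIZE   # virtual page number
    offset = virtual_address % PAGE_SIZE  # position within the page
    frame = page_table.get(page)
    if frame is None:
        raise PageFault(page)
    return frame * PAGE_SIZE + offset     # physical address

def access(virtual_address):
    try:
        return translate(virtual_address)
    except PageFault as fault:
        # Paging supervisor (stand-in): load the page from secondary
        # storage into a free frame, update the table, retry translation.
        page_table[fault.args[0]] = 9     # pretend frame 9 was free
        return translate(virtual_address)

print(hex(access(0x0010)))  # page 0, resident: frame 7 -> 0x7010
print(hex(access(0x1010)))  # page 1 faults, then maps to frame 9 -> 0x9010
```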
Conclusion
Recent advances in computing have produced computers that can quickly perform tasks that would take humans years to accomplish. Today's computers have much higher processing speeds, far greater storage capacity, and even lower power consumption than their predecessors. This advancement has had both hardware and software aspects. The development and optimization of RISC microprocessors has led to very fast processors. Pipelining has enabled computers to work on multiple instructions at the same time. Cache memory has increased the speed of computer operations by providing quickly accessible storage for frequently used data. Virtual memory has enabled computers to use more memory than is physically available in order to accomplish tasks quickly. With proper optimization of all these concepts, the computing power of future computers will continue to expand.
References
Baer, J. (2012). Microprocessor architecture. Cambridge: Cambridge University Press.
Chaplot, V. (2016). Cache memory: an analysis on performance issues. International Journal of Advanced Research in Computer Science and Management Studies, 4(7).
Hennessy, J., & Patterson, D. (2006). Computer architecture. Burlington, MA: Morgan Kaufmann Publishers.
O'Brien, J., & Marakas, G. (2011). Management information systems. New York: McGraw-Hill Higher Education.