Software and Programming Languages
Software is the general term for the programs, routines and procedures, together with their associated documentation, that make a computer system run. In other words, software refers to the actual programs that allow the hardware to do a useful job. Without software, hardware is useless. Software is made up of a series of instructions that tell the computer what to do.
Since the software of a computer system consists of the various sets of instructions that tell the system what to do and how to do it, these sets of instructions are collected together into workable groups known as programs. Thus, without programs of instructions, computers would not be able to function because they would not know what to do.
Importance of computer software to computer systems
A computer system requires a layer of software that enables users to operate it without having to know about the underlying processing that goes on all the time inside. This layer includes the operating system and other forms of system software.
A computer needs separate instructions for even the most elementary tasks. Computer users are interested in solving their problems without having to program every detail into the computer. The system software is provided by the manufacturer and enables the user to give a few simple instructions, which the system software translates into the many minor operations needed for the computer to function in an easy-to-use way.
Every computer is provided with an operating system that controls the vital parts of the computer's operation: handling the keyboard, driving screen displays, loading and saving files, and printing are some examples.
The Earliest Computers
As we enter the 21st century, computers have been around for less than 60 years. The Germans used the Enigma code for all their most secret messages, and such codes were broken with the aid of the first programmable computer, named Colossus, built for this purpose in 1943.
Programming languages are the means of generating the software that makes the computer work. A computer works by executing a program, that is, by following a sequence of instructions. These are held in memory as electronic patterns, known as machine code. The programmer starts with a design of what the program is intended to do, or algorithm, and then writes it in a programming language.
The written program is known as the Source Code and is translated into Object Code, that is, Machine Code.
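The translation from source code to object code can be glimpsed directly in Python, whose interpreter compiles source text into bytecode (an intermediate form analogous to the object code described above, though not native machine code). A minimal sketch:

```python
import dis

# A line of source code, exactly as the programmer writes it.
source = "total = price * quantity + tax"

# Translate the source into a code object: Python's bytecode,
# playing the role of the object code described above.
code = compile(source, "<example>", "exec")

# The translated form is no longer readable text but numeric
# instructions for the interpreter's virtual machine.
print(list(code.co_code[:8]))   # the first few raw bytes of the object code
dis.dis(code)                   # a human-readable listing of those instructions
```

The exact bytes printed vary between Python versions, which itself illustrates a point made later in the text: translated code is tied to the machine (here, the virtual machine) that executes it.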
Languages have a grammar, known as syntax, which states the rules of the language. This makes it possible to recognize 'wrong', ungrammatical uses of language and to avoid spending time and effort trying to understand something that has no meaning. The syntax of a language is not simply a collection of simple and 'correct' uses; it is a set of rules that allows programmers to combine elements of the language into a statement and know it is acceptable to the computer. Statements in languages also have meanings (or semantics). One important difference between human and computer languages is that computer languages are never ambiguous: a statement always has a meaning, even if it is not the meaning that the programmer intended.
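Both points, that ungrammatical statements are rejected outright and that grammatical ones always have exactly one meaning, can be demonstrated with a short Python sketch:

```python
# Syntax: the rules that decide whether a statement is well-formed.
# An ungrammatical statement is rejected before it is ever run.
try:
    compile("x = = 1", "<example>", "exec")
except SyntaxError:
    print("rejected: not a valid statement")

# Semantics: a grammatical statement always has exactly one meaning,
# even if it is not the meaning the programmer intended.
x = 10
y = 3
print(x / y)   # true division
print(x // y)  # floor division: one extra character changes the meaning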
Types and Generations of Languages
- Machine Language or Machine Code – First Generation
Programming languages are often characterized by 'generation'. The first generation of computer language, known as Machine Code, executes directly without translation. Machine Code is the actual pattern of 0s and 1s used in a computer's memory. The programming of Colossus and other early computers was laboriously done with toggle switches representing a pattern of binary codes for each instruction.
Machine language, however it is entered into a computer, is time-consuming, laborious and error-prone.
Machine Code is the set of all possible instructions made available by the hardware design of a particular processor. These instructions operate on very basic items of data, such as bytes or even single bits. They may be given memorable names in the associated documentation, but they can be understood by the computer only when expressed in binary notation. Hence, machine code is very difficult to write without mistakes. In practice, machine-code programming is achieved by the programmer writing in assembly language.
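To make the idea of binary instruction patterns concrete, here is a sketch in Python of a purely hypothetical 8-bit instruction format (not any real processor): the top three bits select the operation and the low five bits give an operand.

```python
# A hypothetical 8-bit instruction format (illustration only, not a real CPU):
# bits 7-5 select the operation, bits 4-0 give an operand.
OPCODES = {0b000: "HALT", 0b001: "LOAD", 0b010: "ADD", 0b011: "STORE"}

def decode(instruction: int) -> str:
    """Split one byte of 'machine code' into its opcode and operand."""
    opcode = (instruction >> 5) & 0b111
    operand = instruction & 0b11111
    return f"{OPCODES[opcode]} {operand}"

# The bit pattern 00100111 means: opcode 001 (LOAD), operand 00111 (7).
print(decode(0b00100111))  # LOAD 7
print(decode(0b01000011))  # ADD 3
```

Reading raw patterns like `00100111` is exactly the error-prone work the text describes, which is why the memorable names reappear in the next section as assembly-language mnemonics.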
- Assembly Language – Second Generation
In the 1950s, when computers were first used commercially, machine code gave way to assembly code which allowed programmers to use mnemonics (abbreviations that represent the instructions in a more memorable way) and denary numbers (that is 0 to 9), instead of 0s and 1s.
Programs written in assembly languages have to be translated into machine code before they can be executed, using a program called an assembler. There is more or less a one-to-one correspondence between each assembly-code statement and its equivalent machine-code statement, which means that programs can be written in the most efficient way possible, occupying as little space as possible and executing as fast as possible. For this reason, assembly code is used for applications where timing and storage are critical. Assembly languages are called low-level languages because they are close to machine code and the detail of the computer architecture.
Since different types of computers have different instruction sets which depend on how the machine carries out the instructions, both machine code and assembly code are machine-dependent. Each type of computer will have its own assembly language.
Assembly language is closely related to the computer's own machine code. Instead of writing actual machine-code instructions, which would typically need to be entered in binary or hexadecimal, the assembly-language programmer is generally able to use descriptive names for data stores and mnemonics for instructions. The program is then assembled, by the software known as an assembler, into the appropriate machine-code instructions.
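The one-to-one translation an assembler performs can be sketched as a toy program in Python. The mnemonics and opcodes here are invented for illustration; a real assembler also handles labels, addressing modes and symbol tables.

```python
# A toy assembler sketch (hypothetical instruction set, not a real CPU):
# each mnemonic maps one-to-one onto a numeric opcode, and each assembly
# statement becomes exactly one opcode/operand byte pair of machine code.
MNEMONICS = {"LOAD": 0x01, "ADD": 0x02, "STORE": 0x03, "HALT": 0x00}

def assemble(program: str) -> bytes:
    """Translate assembly source into machine code, one statement at a time."""
    machine_code = []
    for line in program.strip().splitlines():
        parts = line.split()
        mnemonic = parts[0]
        operand = int(parts[1]) if len(parts) > 1 else 0
        machine_code += [MNEMONICS[mnemonic], operand]
    return bytes(machine_code)

source = """
LOAD 7
ADD 3
STORE 12
HALT
"""
print(assemble(source).hex())  # 01070203030c0000
```

Because each statement maps onto one fixed instruction, the size and speed of the output are entirely under the programmer's control, which is the efficiency argument made above.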
- Imperative High-Level Language – Third Generation
As computer use increased dramatically in the 1950s, the need grew to make it easier and faster to write error-free programs. Computer manufacturers and user groups started to develop so-called high-level languages, such as ALGOL (ALGOrithmic Language) and FORTRAN (FORmula TRANslation). In the 1950s, most of the people actually writing programs were scientists and engineers, and so both these languages were created to be used for mathematical work.
COBOL (COmmon Business Oriented Language) was invented by the redoubtable Admiral Grace Hopper in 1960, specifically for writing commercial and business programs rather than scientific ones. Whilst serving in the US Navy in 1947, Grace Hopper was investigating why one of the earliest computers was not working and discovered a small dead moth in the machine. After it was removed, the machine worked fine, and from then on computer errors were known as 'bugs'.
So, an imperative language is one in which the programmer specifies the steps needed to execute the program, such as program statements, declarations and control structures, together with the order in which they are carried out. Most early programming languages were imperative.
Current high-level languages generally have features that would enable them to be used as procedural languages.
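The imperative style described above can be sketched in a few lines of Python: the programmer declares data, then spells out each step and the order in which the steps run, including a control structure.

```python
# A minimal imperative sketch: declarations, statements, and a control
# structure, executed in the order the programmer specifies.
prices = [3.50, 2.25, 4.00]   # declaration of the data
total = 0.0                   # step 1: initialise an accumulator

for price in prices:          # step 2: a control structure repeats...
    total = total + price     # ...this statement once per item

print(f"Total: {total:.2f}")  # step 3: report the result -> Total: 9.75
```

Each line says *how* to get the result, step by step, which is the defining trait of the imperative style; a non-imperative language would instead describe *what* result is wanted.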