Part 1 Chapter 2: Communication and Internet Technologies
At the periphery of the Internet there are different types of network. Whenever networks with different underlying technologies need to communicate, the device required is a gateway. Part of the functionality provided by a gateway can be the same as that provided by a router. One definition of a server is a specialised type of computer hardware designed to provide functionality when connected to a network. A server does not contribute to the functioning of the network itself; rather, it is a means of providing services via the network. In the context of the Internet, a server may act as any of the following:
• an application server (see Section 2.05)
• a web server (see Section 2.05)
• a domain name server (see Section 2.08)
• a file server
• a proxy server.

KEY TERMS
Router: a device that acts as a node on the Internet
Gateway: a device that connects networks of different underlying technologies
Server: a device that provides services via a network
File server functionality is very often provided by what is called a ‘server farm’, in which a very large number of servers work together in a clustered configuration. Tier 1 content providers use server farms, and they are also used in the provision of cloud storage, which an ISP can offer as part of its service portfolio. One example of the use of a proxy server is when a web server might otherwise become overwhelmed by web page requests. When a web page is requested for the first time, the proxy server saves a copy in a cache. Whenever a subsequent request for that page arrives, the proxy can supply the page directly, without having to search through the filestore of the main server. At the same time, a proxy server can act as a firewall and provide some security against malicious attacks on the server. Security is discussed further in Chapter 8 (Section 8.02).
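The caching behaviour described above can be sketched in a few lines of Python. This is an illustrative sketch only, not part of the course text: the dictionary cache and the function fetch_from_origin are invented names standing in for the proxy's page store and its request to the main web server.

# Minimal sketch of a caching proxy's request handling (illustrative only).
cache = {}  # maps a requested URL to a previously fetched copy of the page

def fetch_from_origin(url):
    # Placeholder: in a real proxy this would contact the origin web server.
    return "<html>page content for " + url + "</html>"

def handle_request(url):
    if url in cache:
        # Subsequent request: serve the stored copy, sparing the origin server.
        return cache[url]
    # First request: fetch the page, keep a copy in the cache, then return it.
    page = fetch_from_origin(url)
    cache[url] = page
    return page

print(handle_request("example.org/index.html"))  # first request: fetched and cached
print(handle_request("example.org/index.html"))  # repeat request: served from the cache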
2.05 Client–server architecture
Following the arrival of the PC in the 1980s, it was soon realised that the use of stand-alone PCs was not viable in any large organisation. In order to provide sufficient resources to an individual PC, it had to be connected to a network. Initially, servers were used to provide extra facilities that the PCs shared (such as filestore, software applications or printing). A further development was the implementation of what came to be known as the ‘client–server’ architecture. At the time, the traditional architecture of a mainframe computer with connected terminals was still in common use, and the client–server approach was seen as a competitor in which networked PCs (the clients) had access to one or more powerful minicomputers acting as servers. The essence of the client–server architecture as it was first conceived is a distributed computer system in which a client carries out part of the processing and a server carries out another part. In order for the client and server to cooperate, software called ‘middleware’ has to be present. This basic concept still holds in present-day client–server applications, but the language used to describe how they operate has changed.
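To make the division of processing concrete, the following Python sketch shows a minimal client–server exchange. It is an assumption-laden illustration rather than anything from the text: the address 127.0.0.1:50007 and the choice of task (the server converts the client's message to upper case) are invented for the example. The client sends a request and handles presentation of the result; the server carries out the processing for that request and returns only the answer.

# Minimal client–server sketch (illustrative only).
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 50007  # invented address for this example

def run_server():
    # Server side: accept one connection, do the processing for the request
    # (here, converting the text to upper case) and send back the result.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen()
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(1024).decode()
            conn.sendall(request.upper().encode())

def run_client(text):
    # Client side: send the request, then carry out the client's share of the
    # work, such as displaying the result to the user.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(text.encode())
        print("Server replied:", cli.recv(1024).decode())

threading.Thread(target=run_server, daemon=True).start()
time.sleep(0.5)  # give the server a moment to start listening
run_client("hello from the client")

In this division of labour the server performs the shared computation while the client deals with user interaction; the middleware mentioned above is what hides the connection details from the application code at each end.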