History of the Internet Part 2


The first concepts of the Internet

The original ARPANET grew into the Internet. The Internet was based on the idea that there would be multiple independent networks of rather arbitrary design, beginning with the ARPANET as the pioneering packet switching network, but soon to include packet satellite networks, packet radio networks, and other networks. The Internet as we now know it embodies a key underlying technical idea, namely open-architecture networking. In this approach, the choice of any individual network technology was not dictated by a particular network architecture; rather, a provider could select a technology freely and make it interwork with the other networks through a meta-level internetworking architecture. Up until that time there was only one general method for federating networks: the traditional circuit switching method, in which networks interconnect at the circuit level, passing individual bits synchronously along a portion of an end-to-end circuit between a pair of end locations. Recall that Kleinrock had shown in 1961 that packet switching was a more efficient switching method. Along with packet switching, special-purpose interconnection arrangements between networks were another possibility. While there were other limited ways to interconnect different networks, they required that one be used as a component of the other, rather than acting as a peer of the other in offering end-to-end service.


A network of open architecture allows the individual networks to be separately designed and developed, each with its own unique interface, which it may offer to users and/or other providers, including other Internet providers. Each network can be designed in accordance with its own particular environment and user requirements. There are generally no constraints on the types of network that can be included or on their geographic scope, although certain pragmatic considerations dictate what is sensible to offer.


Kahn first introduced the concept of open-architecture networking shortly after arriving at DARPA in 1972. Initially, this effort was part of the packet radio programme but later became a distinct programme in its own right; at the time, the programme was called "Internetting." Key to making the packet radio system work was a reliable end-to-end protocol that could maintain effective communication in the face of jamming and other radio interference, or withstand intermittent blackout such as that caused by a tunnel or blocked local terrain. Kahn first contemplated developing a protocol local only to the packet radio network, since that would avoid having to deal with a multitude of different operating systems while continuing to use NCP.


However, NCP did not have the capability of addressing networks (and machines) further downstream than a destination IMP on the ARPANET, so some changes to NCP would be necessary. (It was assumed that the ARPANET was not modifiable in this regard.) NCP relied on the ARPANET to provide end-to-end reliability; if any packets were lost, the protocol (and hence any programmes that used it) would come to a grinding halt. In this model, NCP had no end-to-end host error control, since the ARPANET was to be the only network in existence and would be so dependable that no error control on the part of the hosts was necessary. As a result, Kahn decided to create a new version of the protocol that could fulfil the requirements of an open-architecture network environment. This protocol would eventually become known as the Transmission Control Protocol/Internet Protocol (TCP/IP). Whereas NCP acted more like a device driver, the new protocol would be more like a communications protocol.

Kahn's early ideas were guided by four ground rules:


  • Each separate network would have to stand on its own, with no internal modifications necessary to link it to the Internet.


  • Communication would be on a best-effort basis. If a packet did not make it to the final destination, it would shortly be retransmitted from the source.


  • To link the networks, black boxes would be utilised, which would eventually be referred to as gateways and routers. The gateways would store no information about the specific flows of packets going through them, keeping them simple and avoiding complex adaptation and recovery from numerous failure scenarios.


  • At the operations level, there would be no global control.


Other critical concerns that needed to be solved included:


  • Algorithms for preventing lost packets from permanently interrupting communications and allowing them to be successfully retransmitted from the source.


  • Allowing for host-to-host "pipelining," which allows numerous packets to be routed from source to destination at the discretion of the participating hosts if the intermediate networks permit it.


  • Gateway functions that allow packets to be forwarded appropriately, including interpreting IP headers for routing, handling interfaces, breaking packets into smaller pieces if necessary, and so on.


  • The requirement for end-to-end checksums, reassembly of packets from fragments, and detection of duplicates, if any (a checksum sketch follows this list).


  • The requirement for global addressing.


  • Techniques for controlling host-to-host traffic.


  • Interfacing with different operating systems.


  • Other issues were implementation efficiency and internetwork performance, but these were initially secondary considerations.
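
To make the checksum item above concrete, here is a minimal sketch in Python (illustrative only, not code from the period) of the 16-bit ones'-complement checksum that IP, TCP, and UDP ultimately adopted; the function name and sample payload are invented for the example.

    def ones_complement_checksum(data: bytes) -> int:
        """16-bit ones'-complement sum of the kind used by IP, TCP, and UDP."""
        if len(data) % 2:                              # pad odd-length data with a zero byte
            data += b"\x00"
        total = 0
        for i in range(0, len(data), 2):
            total += (data[i] << 8) | data[i + 1]      # combine two bytes into a 16-bit word
            total = (total & 0xFFFF) + (total >> 16)   # fold any carry back into the low 16 bits
        return ~total & 0xFFFF                         # ones' complement of the running sum

    # A receiver recomputes the sum over the payload plus the transmitted checksum field;
    # an undamaged packet verifies because the recomputed check comes out to zero.
    packet = b"example payload"
    print(hex(ones_complement_checksum(packet)))

The design choice echoes the ground rules above: error detection sits in the hosts at the ends, not in the gateways along the path.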


While at BBN, Kahn began working on a communication-oriented set of operating system principles, documenting some of his early ideas in an internal BBN memorandum titled "Communications Principles for Operating Systems." At this point, he recognised that he would need to understand the implementation details of each operating system in order to effectively integrate any new protocols. Thus, in the spring of 1973, after launching the internetting project, he contacted Vint Cerf (then at Stanford) to collaborate on the protocol's exact architecture. Cerf had been heavily involved in the initial NCP design and development, and he was already familiar with connecting to current operating systems. So, armed with Kahn's architectural approach to communications and Cerf's NCP knowledge, they collaborated to define what became TCP/IP.


The exchange of ideas was fruitful, and the first written version of the resulting approach was distributed as INWG#39 at a special meeting of the International Network Working Group (INWG) in September 1973 at Sussex University. A revised version was published in 1974. The INWG had been formed at the International Computer Communications Conference in October 1972, organised by Bob Kahn and others, and Cerf was asked to chair the group.


This cooperation between Kahn and Cerf yielded some fundamental approaches:


  • Communication between two processes would logically consist of a very long stream of bytes (which they called octets). The position of any octet in the stream would be used to identify it.


  • Flow control would be handled using sliding windows and acknowledgements (acks). The destination could choose when to acknowledge, and each ack returned would be cumulative for all packets received up to that point.


  • Exactly how the source and destination would agree on the windowing parameters to be used was left open; defaults were used initially.


  • Although Ethernet was under development at Xerox PARC at the time, the proliferation of LANs, let alone PCs and workstations, was not anticipated. The original model was national-level networks such as the ARPANET, of which only a relatively small number were expected to exist. Thus a 32-bit IP address was used, with the first 8 bits indicating the network and the remaining 24 bits indicating the host on that network (a sketch of this split follows the list). When LANs began to appear in the late 1970s, it became clear that this assumption, that 256 networks would be sufficient for the foreseeable future, needed to be reconsidered.
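
As a rough illustration of that last assumption (a sketch, not historical code; the function name and sample address are made up for the example), splitting a 32-bit address into the original 8-bit network and 24-bit host fields looks like this in Python:

    def split_early_address(addr: int) -> tuple[int, int]:
        """Split a 32-bit address into the early 8-bit network / 24-bit host fields."""
        network = (addr >> 24) & 0xFF    # top 8 bits: at most 256 distinct networks
        host = addr & 0x00FFFFFF         # remaining 24 bits: host within that network
        return network, host

    # 10.0.0.42 packed into a single 32-bit integer
    addr = (10 << 24) | (0 << 16) | (0 << 8) | 42
    print(split_early_address(addr))     # -> (10, 42)

The 8-bit network field is precisely why the 256-network assumption was baked in, and why the later explosion of LANs forced a rethink.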


The original Cerf/Kahn article on the Internet envisioned a single protocol, TCP, that offered all of the Internet's transport and forwarding capabilities. Kahn intended for the TCP protocol to support a variety of transport services, ranging from completely reliable sequenced data delivery (the virtual circuit model) to a datagram service in which the application made direct use of the underlying network service, which could result in occasionally lost, corrupted, or reordered packets. The original effort to build TCP, however, resulted in a version that only supported virtual circuits. This paradigm worked well for file transfer and remote login applications, but early work on sophisticated network applications, particularly packet voice in the 1970s, showed that in some situations packet losses should not be corrected by TCP, but should be left for the application to deal with. This led to the restructuring of the original TCP into two protocols: the basic IP, which simply supported addressing and forwarding of individual packets, and the separate TCP, which was concerned with service characteristics such as flow control and packet recovery. For applications that did not require TCP services, an alternative known as the User Datagram Protocol (UDP) was created to offer direct access to the basic IP service.
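
That split is still visible in today's socket interfaces. As a rough modern illustration (using Python's standard socket module, not anything from the 1970s; the host names and port numbers are arbitrary examples), TCP presents a reliable byte stream while UDP hands the application raw, best-effort datagrams:

    import socket

    # Reliable, ordered byte stream: the virtual-circuit style of service TCP provides.
    tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    tcp.connect(("example.com", 80))
    tcp.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    print(tcp.recv(256))
    tcp.close()

    # Best-effort datagrams: UDP exposes the basic IP service almost directly,
    # so loss, duplication, and reordering are left for the application to handle.
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.sendto(b"hello", ("127.0.0.1", 9999))   # no delivery guarantee
    udp.close()

Applications such as packet voice take the UDP path for exactly the reason described above: a late retransmission is worth less than simply tolerating the occasional lost packet.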


Resource sharing was a key original motivation for both the ARPANET and the Internet, for example allowing users on the packet radio networks to access the ARPANET's time-sharing services. Connecting the two was far more economical than duplicating these very expensive machines. While file transfer and remote login (Telnet) were important applications, electronic mail probably had the most significant impact of the era's innovations. Email provided a new model of how people could communicate with one another, and it changed the nature of collaboration, first in the building of the Internet itself (as discussed below) and later for much of society.


Other applications proposed in the early days of the Internet included packet-based voice communication (the forerunner of Internet telephony), various forms of file and disc sharing, and early "worm" programmes that demonstrated the idea of agents (and, of course, viruses). The Internet was not designed for just one application, but as a general infrastructure on which new applications could be conceived, as demonstrated later by the emergence of the World Wide Web. This is made possible by the general-purpose nature of the service provided by TCP and IP.



Demonstrating the Concepts


DARPA awarded three contracts to Stanford (Cerf), BBN (Ray Tomlinson), and UCL (Peter Kirstein) to implement TCP/IP (in the Cerf/Kahn article it was simply named TCP but included both components). The Stanford team, led by Cerf, produced the detailed specification, and within about a year there were three independent TCP implementations that could interoperate.


This marked the start of a lengthy period of testing and research to expand and mature Internet concepts and technology. Beginning with the original three networks (ARPANET, Packet Radio, and Packet Satellite) and their respective research communities, the experimental environment has expanded to include virtually every type of network as well as a diverse research and development community. [REK78] Each growth has brought with it new problems.


TCP was first implemented for large time-sharing systems such as Tenex and TOPS-20. When personal computers first appeared, some believed that TCP was too large and complex to run on such a machine. David Clark and his MIT research group set out to demonstrate that a small and simple implementation of TCP was achievable. They created a version for the Xerox Alto (an early personal workstation developed at Xerox PARC) and later for the IBM PC. That implementation was fully interoperable with other TCPs, but was tailored to the application suite and performance objectives of the personal computer, showing that workstations, as well as large time-sharing systems, could be part of the Internet. In 1976, Kleinrock published the first book on the ARPANET. It emphasised the complexity of protocols and the pitfalls they often introduce. This book was influential in spreading the lore of packet switching networks to a very broad community.


The widespread development of LANs, PCs, and workstations in the 1980s allowed the fledgling Internet to flourish. Ethernet technology, invented by Bob Metcalfe at Xerox PARC in 1973, is probably the dominant network technology on the Internet, and PCs and workstations the dominant machines. The change from having a few networks with a modest number of time-shared hosts (the original ARPANET model) to having many networks resulted in a number of new concepts and changes to the underlying technology. First, it led to the definition of three network classes (A, B, and C) to accommodate the range of networks. Class A represented large national-scale networks (a small number of networks with a large number of hosts), Class B regional-scale networks, and Class C local area networks (a large number of networks with relatively few hosts).
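
A small sketch (illustrative, not period code; the helper name is invented) shows how the class of an address was determined purely from its leading bits:

    import ipaddress

    def address_class(addr: str) -> str:
        """Classify an IPv4 address under the original classful scheme."""
        first_octet = int(ipaddress.IPv4Address(addr)) >> 24
        if (first_octet & 0b10000000) == 0:
            return "A"    # 0xxxxxxx: a handful of networks, ~16 million hosts each
        if (first_octet & 0b11000000) == 0b10000000:
            return "B"    # 10xxxxxx: regional-scale networks
        if (first_octet & 0b11100000) == 0b11000000:
            return "C"    # 110xxxxx: many networks with relatively few hosts each
        return "other"    # classes D (multicast) and E (reserved) came later

    print(address_class("18.0.0.1"))     # A
    print(address_class("172.16.0.1"))   # B
    print(address_class("192.168.1.1"))  # C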


As the Internet grew in size and complexity, so did the management challenges that accompanied it. To make it easier for users to use the network, hosts were given names instead of numeric addresses, so they didn't have to memorize them. Because there were initially a very small number of hosts, it was possible to keep a single database containing all the hosts and their associated names and addresses.

The transition to having a large number of separately maintained networks (e.g., LANs) meant that maintaining a single table of hosts was no longer practical, and Paul Mockapetris of USC/ISI created the Domain Name System (DNS). The DNS protocol enabled a scalable distributed system for converting hierarchical hostnames (for example, www.acm.org) into Internet addresses.
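
In practice, a program no longer consults a host table directly; it asks the local resolver, which walks the distributed DNS hierarchy on its behalf. A tiny illustration using Python's standard library (the host name is simply the example from the text; nothing here depends on a particular resolver implementation):

    import socket

    # Resolve a hierarchical host name to an address via the local resolver,
    # which in turn queries the distributed DNS rather than a single master file.
    print(socket.gethostbyname("www.acm.org"))

    # getaddrinfo is the more general interface (multiple addresses, IPv6, services).
    for family, _, _, _, sockaddr in socket.getaddrinfo("www.acm.org", 80):
        print(family, sockaddr)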


The growing scale of the Internet put the routers' capabilities to the test. Originally, there existed a single distributed routing method that was applied consistently by all Internet routers. As the number of networks on the Internet grew, this first architecture could not scale as needed, so it was replaced with a hierarchical routing model, with an Interior Gateway Protocol (IGP) used inside each area of the Internet and an Exterior Gateway Protocol (EGP) used to connect the regions.

This architecture permitted different regions to use a different IGP, so that different requirements for cost, rapid reconfiguration, robustness, and scale could be met. Not only the routing algorithm but also the size of the addressing tables strained the capacity of the routers. New approaches for address aggregation, in particular classless inter-domain routing (CIDR), have recently been introduced to control the size of router tables.
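
As a small worked example of what CIDR aggregation buys (a sketch using Python's standard ipaddress module; the prefixes are documentation-style examples, not real routes), four adjacent /24 networks collapse into a single /22 announcement, shrinking the routing table accordingly:

    import ipaddress

    # Four adjacent class-C-sized blocks...
    routes = [ipaddress.ip_network(n) for n in (
        "198.51.100.0/24", "198.51.101.0/24",
        "198.51.102.0/24", "198.51.103.0/24",
    )]

    # ...collapse into one CIDR aggregate covering all of them.
    print(list(ipaddress.collapse_addresses(routes)))   # [IPv4Network('198.51.100.0/22')]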


One of the key challenges as the Internet grew was how to propagate changes to the software, particularly the host software. DARPA supported UC Berkeley to investigate modifications to the Unix operating system, including incorporating the TCP/IP implementation developed at BBN. Although Berkeley later rewrote the BBN code to fit more efficiently into the Unix system and kernel, the incorporation of TCP/IP into the Unix BSD system releases proved to be a critical element in the dissemination of the protocols to the research community. Much of the CS research community began to use Unix BSD for their day-to-day computing environment. Looking back, the strategy of incorporating Internet protocols into a supported operating system for the research community was one of the key elements in the eventual widespread adoption of the Internet.


TCP/IP had been adopted as a defense standard in 1980, three years before the ARPANET's transition from NCP. This allowed the defense community to begin sharing in the DARPA Internet technology base and led directly to the eventual partitioning of the military and non-military communities. By 1983, ARPANET was being used by a significant number of defense R&D and operational organisations. The transition of ARPANET from NCP to TCP/IP permitted it to be split into a MILNET supporting operational requirements and an ARPANET supporting research needs.





