Tuesday, July 1, 2008

Internet2 - high technology just around the corner…

In recent years the Internet has reached considerable heights; the growth curve of the worldwide network's user count is eloquent proof. Talk of further development, however, tends to be about extensive growth: the Internet has developed successfully and will keep evolving, yet the efficiency with which its resources are used is constantly decreasing. For scientific purposes, and for some everyday tasks, even today's megabit speeds are clearly not enough… One of the goals of the next-generation network is to raise data transfer speeds to 10 Gbit/s. And that is just the beginning…


The Internet: a brief digression

It all started back in 1962, when ARPA launched research into the military applications of computer technology, headed by Dr. J. C. R. Licklider, who proposed using interconnected government computers for the purpose.
The main idea of this "experiment" was to build a network of functionally equal nodes, each with a degree of autonomy and its own units for receiving, transmitting and processing information, so that the network would remain operational even after sustaining significant damage (remember what year it was…).
The first experiments in linking remote nodes took place in 1965, when the TX-2 computer at MIT's Lincoln Lab was connected to the Q-32 at SDC (System Development Corporation) in Santa Monica. Interestingly, no packet exchange took place between them at the time: data was exchanged character by character.

In 1967, at a symposium of the ACM (Association for Computing Machinery), a plan was announced for creating a national packet-switched network, and in 1969 the Department of Defense approved ARPANET (Advanced Research Projects Agency NETwork) as the lead project for research on computer networks. The first node of the new network was UCLA, the network's test center, and it was soon joined by a number of the country's leading research institutions (Stanford Research Institute, the University of California at Santa Barbara, the University of Utah, etc.).

By early 1971 the network already had 15 nodes: UCLA, UCSB, SRI, University of Utah, CWRU, BBN, MIT, SDC, RAND, Harvard, Lincoln Lab, Stanford, UIU(C), NASA/Ames, CMU, with 23 hosts in total.

In 1972, a group of graduate students led by Stanford University professor Vinton Cerf developed a number of protocols that later turned into TCP/IP. "Had I known that TCP/IP would become an international industrial standard used by millions of people," Cerf said in 1994, "I would have provided for more than a 32-bit address space and would have planned for high-speed environments with long delays" (Vinton G. Cerf, The Internet Phenomenon). The Telnet specification was published (RFC 454). The same year saw the first commercial version of UNIX, written in C; its success exceeded all expectations.
The first connections to ARPANET from "outside" were made in 1973, when machines in England (University College London) and Norway (Royal Radar Establishment) were attached to the network.

In 1982, DCA and ARPA established the Internet Protocol (IP) and Transmission Control Protocol (TCP) as the basis for building the network, and on January 1, 1983 the Department of Defense declared TCP/IP its standard.

In 1990, ARPANET ceased to exist and its functions passed to NSFNET (the National Science Foundation's network). Austria, Argentina, Belgium, Brazil, Chile, Greece, India, Ireland, South Korea and Switzerland joined the network.
At CERN (Conseil Européen pour la Recherche Nucléaire), Tim Berners-Lee developed the World Wide Web (WWW), probably the most popular service in the history of the Internet. Naturally, the WWW's popularity among users eclipsed Telnet, and the network was soon joined by Algeria, Armenia, Bermuda, Burkina Faso, China, Colombia, Jamaica, Lebanon, Lithuania, Macau, Morocco, Niger, etc.

It's no secret that today's Internet has significant technical problems inherited from ARPANET:

1) An insufficiently large address space, limited to 32-bit addresses. The consequence is a shortage of IP addresses.

2) Low performance and no automatic address configuration. It's no secret that IPv4's packet fragmentation algorithms are imperfect and do not meet modern requirements: splitting packets at routers consumes an unreasonably large amount of system resources. The fragmentation procedure, with its unjustifiably long processing time, burdens the intermediate routers.

3) Today's Internet is poorly suited to transmitting audio and video online. Voice traffic is distorted so heavily in transit that one can hardly speak of quality transmission; the result is a small, squeaking window…

4) The inability to implement broadcasting over the Internet, which holds back the development of broadcasting on the Net…

5) The fifth point is fifth only on the list, for it is quite important: the security of network protocols and of client-server interaction. It is probably not for nothing that there is even an entire classification of security threats to Web servers…


Internet2

In addition to the above, one of the main motivations for the Internet2 project is the problem of increasing the efficiency of Internet use (the so-called PSP, Problem Solving Potential), which is acute in the research community: biologists, doctors, physicists, etc. It's no secret that for real-time video the capabilities of the modern Internet are clearly insufficient: a small window with sound, smoothly turning into a still picture ;), and so on.

Internet2, like the Internet, was created in the USA. The need for a project of this kind was first voiced in October 1996, when representatives of 34 leading universities gathered at a meeting in Chicago to seriously discuss the matter.

The result of the meeting was an agreement under which each project participant invests 500 thousand dollars annually. Already in 1997, Bill Clinton took interest in the new project and supported it, deciding to "merge" Internet2 with the existing NGI (Next Generation Internet) project. A couple of words about NGI.

Unlike Internet2 (a closed project), NGI is an open project, built around a program of designing and implementing new network technologies with a more advanced foundation and high bandwidth. The project was supported at the highest level: on October 10, 1996, President Clinton and Vice President Gore announced their support for the initiative to establish NGI, to be based on the latest advanced work carried out in federal agencies. The initiative is primarily aimed at supporting research and development in networking technologies and at providing high-speed connections between major research centers. The high-speed backbone network vBNS was adopted as NGI's technical basis.

Clearly, the use of high-speed channels requires the development of supporting services - data transmission protocols, administration tools - and such development is actively under way. In addition, next-generation applications that exploit all the possibilities of high-speed connections (videoconferencing, telemedicine, remote control of physical experiments) are now being developed.


What is Internet2?

Internet2 consists of so-called GigaPoPs (gigabit-capacity points of presence). The number of such points already exceeds 30 and is constantly growing. GigaPoPs are interconnected by the main backbone, the Internet2 Backbone Networks. Through these core points, fiber (13,000 miles of optical fiber in total) connects the universities, research centers and other agencies involved in the program. The main backbone is based on the fiber-optic Abilene network, estimated to cost 500 million dollars.


Abilene Network

The network was established in 1999, when the Abilene backbone channel had a capacity of 2.5 Gbit/s, and was intended for research on advanced networking technologies and applications. The name "Abilene" comes from a railhead established in the 1860s near the town of Abilene, Kansas. The network brings together more than 230 American universities, research centers and other institutions; something like a managing center, the so-called Network Operations Center (NOC), is located at Indiana University. The network is funded by participants' contributions.

Initially, the backbone was developed by order of UCAID (the University Corporation for Advanced Internet Development). Giants such as Cisco Systems, Qwest Communications, Nortel Networks and Indiana University were involved in building the network.

The main feature of today's Abilene Network is its high data transmission speed. As stated above, the rate was initially 2.5 Gbit/s. In 2003 the transition to the OC-192c standard, with a rated speed of 10 Gbit/s, began; it was completed on February 4, 2004, and real speeds are now about 6-10 Gbit/s. Data packets are routed by dedicated high-performance routers. Currently, the network is jointly supported by the Internet2 consortium, Indiana University, and the companies Qwest Communications, Nortel Networks and Juniper Networks.

Clearly, the transition to new high-speed channels demanded a matching development of new routing protocols. One of the protocols serving the new-generation network is a modified IP: the new IP removes the restriction on the number of IP addresses. The modification of IP version 4 (IPv4) was named IPv6 (IP version 6). The new protocol uses 128-bit addressing instead of 32-bit, and it is not hard to estimate that the number of devices that could theoretically be connected to the network now runs to about 10 to the 38th power.

The new protocol is also good in that it enables broadcast-style data delivery, so-called multicast. What does this technology give? It is known that a single channel carrying the sound of a radio broadcast over the Internet consumes a certain amount of bandwidth. Multiply that value by the number of listeners and you get a substantial load on the network. With multicast, a single stream of data (video, audio, etc.) travels over a shared channel to several clients at once, effectively increasing the capacity of the network, with all the ensuing consequences. The pluses of the new protocol also include greater security and a new level of "quality of service" for transmitted packets, ensuring continuous adherence to the network's bandwidth and delivery-time parameters.
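The scale of that jump in address space is easy to verify with a few lines of arithmetic; here is a quick sketch in plain Python (nothing project-specific, just the 32-bit vs 128-bit comparison from the text):

```python
# Compare the IPv4 and IPv6 address spaces mentioned above:
# 32-bit versus 128-bit addressing.
ipv4_addresses = 2 ** 32
ipv6_addresses = 2 ** 128

print(f"IPv4: {ipv4_addresses:,} addresses")    # about 4.3 billion
print(f"IPv6: {ipv6_addresses:.2e} addresses")  # about 3.40e+38
```

So the "10 to the 38th power" figure is the order of magnitude of 2^128 ≈ 3.4 × 10^38 - roughly 10^29 times more addresses than IPv4 offers.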

As for compatibility between the "old" and "new" Internet, it is all simple: from the Internet2 space (IPv6) you can see the Internet (IPv4), but not vice versa. Still, there is a way to "look there" from the old side: through one of the IPv4/IPv6 gateways graciously offered, for example, by Hurricane Electric's IPv6 Tunnel Broker - then everything becomes possible... It takes next to nothing: register, enable IPv6 stack support on your machine, and configure traffic tunneling accordingly. As for applications for the new-generation network, a number of projects are already operating successfully in the new environment. Let us comment on some of them in more detail.
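The address tricks behind such transition gateways can be poked at offline with Python's standard ipaddress module. The sketch below is illustrative (the sample addresses come from the 192.0.2.0/24 documentation range, not from the article): it shows how an IPv4 peer appears inside an IPv6 address, and how a 6to4 prefix embeds the IPv4 tunnel endpoint:

```python
import ipaddress

# IPv4-mapped IPv6 address (::ffff:a.b.c.d): how an IPv6-capable host
# represents an IPv4 peer.
mapped = ipaddress.ip_address("::ffff:192.0.2.1")
print(mapped.ipv4_mapped)       # 192.0.2.1

# 6to4 (2002::/16): the IPv4 tunnel endpoint is embedded right in the
# IPv6 prefix - the kind of scheme IPv4/IPv6 gateways rely on.
six_to_four = ipaddress.ip_address("2002:c000:0201::1")
print(six_to_four.sixtofour)    # 192.0.2.1
```

Both properties (`ipv4_mapped` and `sixtofour`) are part of the standard library's IPv6Address class, so no network access is needed to explore them.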

ResearchChannel - an interesting project, supported by several universities, whose main objective is to promote high-quality video and radio broadcasting over the Internet. Features: watching videos and news online, and setting up direct links and interactive bridges between universities.

Berkeley Internet Broadcasting System (BIBS) - an interactive television system for distance learning, originally designed for Berkeley students. It supports many concurrent programs, including video/audio and data streams, and allows lectures to be delivered remotely to a large number of users simultaneously.

Molecular Interactive Collaborative Environment (MICE) - a visualization environment targeted at complex molecular analysis. The system supports online research, visualizing various molecular mechanisms. The project's engine is Java3D.

Cave5D - an electronic visualization lab that allows continuous visualization of various processes to be received remotely.

The Virtual Cell - emulates an environment in which students study how cells function. Interaction happens in real time.

Tele-immersion - a tool for building virtual environments, widely used for video and teleconferencing.

The Informedia Digital Video Library - a learning system providing rich opportunities for educational, informational and entertainment purposes. The system can automatically recognize video, sound and pictures.

As you can see, the prospects are more than impressive. Downloading music and movies at fantastic speeds, video with a live-presence effect, online interaction between project participants - all this is feasible. Interactive Counter-Strike over the Internet is not far off ;).

