Results 1 - 10 of 62
Domain names - Implementation and Specification
- RFC-1035, USC/Information Sciences Institute, 1987
"... This RFC describes the details of the domain system and protocol, and assumes that the reader is familiar with the concepts discussed in a companion RFC, "Domain Names- Concepts and Facilities " [RFC-1034]. The domain system is a mixture of functions and data types which are an official pr ..."
Cited by 725 (9 self)
Abstract:
This RFC describes the details of the domain system and protocol, and assumes that the reader is familiar with the concepts discussed in a companion RFC, "Domain Names - Concepts and Facilities" [RFC-1034]. The domain system is a mixture of functions and data types which are an official protocol and functions and data types which are still experimental. Since the domain system is intentionally extensible, new data types and experimental behavior should always be expected in parts of the system beyond the official protocol. The official protocol parts include standard queries, responses and the Internet class RR data formats (e.g., host addresses). Since the previous RFC set, several definitions have changed, so some previous definitions are obsolete. Experimental or obsolete features are clearly marked in these RFCs, and such information should be used with caution. The reader is especially cautioned not to depend on the values which appear in examples to be current or complete, since their purpose is illustration, not definition.
Development of the Domain Name System
- In Proc. ACM SIGCOMM, 1988
"... (Originally published in the Proceedings of SIGCOMM ‘88, ..."
Cited by 242 (1 self)
Abstract:
(Originally published in the Proceedings of SIGCOMM '88, ...)
With Microscope and Tweezers: An Analysis of the Internet Virus of November 1988
- In Proceedings of 1989 IEEE Symposium on Research in Security and Privacy, 1989
"... In early November 1988 the Internet, a collection of networks consisting of 60,000 host computers implementing the TCP/IP protocol suite, was attacked by a virus, a program which broke into computers on the network and which spread from one machine to another. This paper is a detailed analysis of th ..."
Cited by 127 (0 self)
Abstract:
In early November 1988 the Internet, a collection of networks consisting of 60,000 host computers implementing the TCP/IP protocol suite, was attacked by a virus, a program which broke into computers on the network and which spread from one machine to another. This paper is a detailed analysis of the virus program itself, as well as the reactions of the besieged Internet community. We discuss the structure of the actual program, as well as the strategies the virus used to reproduce itself. We present the chronology of events as seen by our team at MIT, one of a handful of groups around the country working to take apart the virus, in an attempt to discover its secrets and to learn the network’s vulnerabilities. We describe the lessons that this incident has taught the Internet community and topics for future consideration and resolution. A detailed routine-by-routine description of the virus program, including the contents of its built-in dictionary, is provided.
A Weak-Consistency Architecture for Distributed Information Services
- Computing Systems, 1992
"... services ..."
(Show Context)
A Comparison of Hashing Schemes for Address Lookup in Computer Networks
1989
"... The trend toward networks becoming larger and faster, and addresses increasing in size, has impelled a need to explore alternatives for fast address recognition. Hashing is one such alternative which can help minimize the address search time in adapters, bridges, routers, gateways, and name servers. ..."
Cited by 53 (1 self)
Abstract:
The trend toward networks becoming larger and faster, and addresses increasing in size, has impelled a need to explore alternatives for fast address recognition. Hashing is one such alternative which can help minimize the address search time in adapters, bridges, routers, gateways, and name servers. Using a trace of address references, we compared the efficiency of several different hashing functions and found that the cyclic redundancy checking (CRC) polynomials provide excellent hashing functions. For software implementation, Fletcher checksum provides a good hashing function. Straightforward folding of address octets using the exclusive-or operation is also a good hashing function. For some applications, bit extraction from the address can be used. Guidelines are provided for determining the size of hash mask required to achieve a specified level of performance.
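As a concrete illustration of the folding approach mentioned in the abstract, the sketch below (our own example in C, not code from the paper) XOR-folds a 48-bit address as three 16-bit words and then applies a hash mask to select a bucket; the 256-bucket table size is an arbitrary assumption.

```c
#include <stdint.h>
#include <stdio.h>

#define HASH_BITS 8                        /* assumed table size: 2^8 = 256 buckets */
#define HASH_MASK ((1u << HASH_BITS) - 1)

/* XOR-fold a 48-bit address as three 16-bit words, then apply the
 * hash mask so only the low HASH_BITS bits select the bucket. */
static unsigned hash_addr(const uint8_t addr[6])
{
    unsigned h = 0;
    for (int i = 0; i < 6; i += 2)
        h ^= ((unsigned)addr[i] << 8) | addr[i + 1];
    return h & HASH_MASK;
}

int main(void)
{
    const uint8_t mac[6] = { 0x08, 0x00, 0x2b, 0x13, 0x9a, 0x5e };  /* example address */
    printf("bucket = %u\n", hash_addr(mac));
    return 0;
}
```

Widening the mask trades table memory for shorter bucket chains; a CRC or Fletcher checksum could be substituted for the fold without changing the masking step.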
Characteristics of Destination Address Locality in Computer Networks: A Comparison of Caching Schemes
- Computer Networks and ISDN Systems, 1989
"... The size of computer networks, along with their bandwidths, is growing exponentially. To support these large, high-speed networks, it is necessary to be able to forward packets in a few microseconds. One part of the forwarding operation consists of searching through a large address database. This pr ..."
Cited by 48 (1 self)
Abstract:
The size of computer networks, along with their bandwidths, is growing exponentially. To support these large, high-speed networks, it is necessary to be able to forward packets in a few microseconds. One part of the forwarding operation consists of searching through a large address database. This problem is encountered in the design of adapters, bridges, routers, gateways, and name servers. Caching can reduce the lookup time if there is a locality in the address reference pattern. Using a destination reference trace measured on an extended local area network, we attempt to see if the destination references do have a significant locality. We compared the performance of MIN, LRU, FIFO, and random cache replacement algorithms. We found that the interactive (terminal) traffic in our sample had quite different locality behavior than that of the noninteractive traffic. The interactive traffic did not follow the LRU stack model while the noninteractive traffic did. Examples are shown of the e...
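The replacement policies being compared are easy to picture in code. Below is a minimal sketch (our own illustration in C, not from the paper) of an LRU destination-address cache: entries are kept in recency order, a hit moves the matched address to the front, and a miss inserts at the front, evicting the last slot when the cache is full. The eight-slot size is an arbitrary assumption.

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define CACHE_SLOTS 8                 /* assumed cache size; real designs vary */

struct addr_cache {
    uint8_t entry[CACHE_SLOTS][6];    /* 48-bit addresses, most recent first */
    int     used;
};

/* Look up addr.  On a hit, move the entry to the front (most recently
 * used); on a miss, insert at the front, evicting the last slot (the
 * least recently used entry) if the cache is full.  Returns the hit. */
static bool cache_lookup(struct addr_cache *c, const uint8_t addr[6])
{
    int i;
    for (i = 0; i < c->used; i++)
        if (memcmp(c->entry[i], addr, 6) == 0)
            break;

    bool hit = (i < c->used);
    if (!hit && c->used < CACHE_SLOTS)
        c->used++;
    if (i >= CACHE_SLOTS)
        i = CACHE_SLOTS - 1;          /* full cache, miss: slot of the evicted LRU entry */

    memmove(c->entry[1], c->entry[0], (size_t)i * 6);  /* shift entries 0..i-1 down */
    memcpy(c->entry[0], addr, 6);                      /* addr becomes most recent */
    return hit;
}

int main(void)
{
    struct addr_cache c = { .used = 0 };
    const uint8_t dst[6] = { 0x08, 0x00, 0x2b, 0x13, 0x9a, 0x5e };
    cache_lookup(&c, dst);                 /* first reference: miss, inserted */
    return cache_lookup(&c, dst) ? 0 : 1;  /* second reference: hit */
}
```

A FIFO or random policy would differ only in which slot is chosen for eviction on a miss.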
Addressing reality: An architectural response to demands on the evolving Internet
- In ACM SIGCOMM Workshop on Future Directions in Network Architecture, 2003
"... A system as complex as the Internet can only be designed effectively if it is based on a core set of design principles, or tenets, that identify points in the architecture where there must be common understanding and agreement. The tenets of the original Internet architecture [6] arose as a response ..."
Cited by 37 (1 self)
Abstract:
A system as complex as the Internet can only be designed effectively if it is based on a core set of design principles, or tenets, that identify points in the architecture where there must be common understanding and agreement. The tenets of the original Internet architecture [6] arose as a response to the technical, governmental, and societal environment of internetworking’s earliest days, but have remained central to the Internet as it has evolved. In light of the increasing integration of the Internet into the social, economic, and political aspects of our lives, it is worth revisiting the underlying tenets of what is becoming a central element of the world’s infrastructure. This paper examines three key tenets that we believe should guide the evolution of the Internet in its next generation and beyond. They are: design for change, controlled transparency, and the centrality of the tussle space. [8] Our purpose is not to present these ideas as new, but rather to propose that they should be elevated to central tenets of the evolving architecture of the Internet, and explore the ramifications of doing so. The paper first examines the tenets somewhat abstractly, and then in more detail by studying their relation to several design choices needed for a complete architecture. We conclude with a discussion of the relationship between the network architecture and the applications it serves.
Measurements of Wide Area Internet Traffic
1989
"... Measurement and analysis of current behavior are valuable techniques for the study of computer networks. In addition to providing insight into the operation and usage patterns of present networks, the results can be used to create realistic models of existing traffic sources. Such models are a key c ..."
Cited by 33 (5 self)
Abstract:
Measurement and analysis of current behavior are valuable techniques for the study of computer networks. In addition to providing insight into the operation and usage patterns of present networks, the results can be used to create realistic models of existing traffic sources. Such models are a key component of the analytic and simulation studies often undertaken in the design of future networks. This paper presents measurements of wide area Internet traffic gathered at the junction between a large industrial research laboratory and the rest of the Internet. Using bar graphs and histograms, it shows the statistics obtained for packet counts, byte counts, and packet length frequencies, broken down by major transport protocols and network services. For the purpose of modeling wide area traffic, the histograms are of particular interest because they concisely characterize the distribution of packet lengths produced by different wide area network services such as file transfer, remote login...
Modeling Replica Divergence in a Weak-Consistency Protocol for Global-Scale Distributed Data Bases
1993
"... this paper. References ..."