Results 11 - 20 of 184,734
Tangible bits: towards seamless interfaces between people, bits and atoms
- Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, ACM Press: 234-241
, 1997
"... This paper presents our vision of Human Computer Interaction (HCI): "Tangible Bits. " Tangible Bits allows users to "grasp & manipulate " bits in the center of users’ attention by coupling the bits with everyday physical objects and architectural surfaces. Tangible Bits also ..."
Abstract - Cited by 1390 (61 self)
This paper presents our vision of Human Computer Interaction (HCI): "Tangible Bits." Tangible Bits allows users to "grasp & manipulate" bits in the center of users' attention by coupling the bits with everyday physical objects and architectural surfaces. Tangible Bits also enables users to be aware of background bits at the periphery of human perception using ambient display media such as light, sound, airflow, and water movement in an augmented space. The goal of Tangible Bits is to bridge the gaps between both cyberspace and the physical environment, as well as the foreground and background of human activities. This paper describes three key concepts of Tangible Bits: interactive surfaces; the coupling of bits with graspable physical objects; and ambient media for background awareness. We illustrate these concepts with three prototype systems (the metaDESK, transBOARD and ambientROOM) to identify underlying research issues. Keywords: tangible user interface, ambient media, graspable user interface, augmented reality, ubiquitous computing, center and periphery, foreground and background
Security Architecture for the Internet Protocol
- RFC 1825
, 1995
"... Content-Type: text/plain ..."
Receiver-driven Layered Multicast
, 1996
"... State of the art, real-time, rate-adaptive, multimedia applications adjust their transmission rate to match the available network capacity. Unfortunately, this source-based rate-adaptation performs poorly in a heterogeneous multicast environment because there is no single target rate — the conflicti ..."
Abstract - Cited by 749 (22 self)
State-of-the-art, real-time, rate-adaptive multimedia applications adjust their transmission rate to match the available network capacity. Unfortunately, this source-based rate-adaptation performs poorly in a heterogeneous multicast environment because there is no single target rate: the conflicting bandwidth requirements of all receivers cannot be simultaneously satisfied with one transmission rate. If the burden of rate-adaptation is moved from the source to the receivers, heterogeneity is accommodated. One approach to receiver-driven adaptation is to combine a layered source coding algorithm with a layered transmission system. By selectively forwarding subsets of layers at constrained network links, each user receives the best quality signal that the network can deliver. We and others have proposed that selective forwarding be carried out using multiple IP-Multicast groups, where each receiver specifies its level of subscription by joining a subset of the groups. In this paper, we extend the multiple-group framework with a rate-adaptation protocol called Receiver-driven Layered Multicast, or RLM. Under RLM, multicast receivers adapt both to the static heterogeneity of link bandwidths and to dynamic variations in network capacity (i.e., congestion). We describe the RLM protocol and evaluate its performance with a preliminary simulation study that characterizes user-perceived quality by assessing loss rates over multiple time scales. For the configurations we simulated, RLM results in good throughput with transient short-term loss rates on the order of a few percent and long-term loss rates on the order of one percent. Finally, we discuss our implementation of a software-based Internet video codec and its integration with RLM.
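
As a rough illustration of the receiver-driven adaptation described above, the Python sketch below shows the core RLM idea: drop the highest layer when loss indicates congestion, and periodically run a join experiment to probe for spare capacity. The group addresses, timer values and loss threshold are invented for the example and are not the authors' implementation.

# Minimal sketch of receiver-driven layer adaptation in the spirit of RLM.
# Group addresses, timer values and the loss threshold are illustrative
# assumptions, not values from the paper.

class LayeredReceiver:
    def __init__(self, groups, loss_threshold=0.05):
        self.groups = groups                   # one multicast group per layer, base layer first
        self.level = 1                         # number of layers currently subscribed
        self.loss_threshold = loss_threshold
        self.join_timer = [5.0] * len(groups)  # per-layer backoff before the next join experiment
        self.last_change = 0.0

    def on_loss_report(self, now, loss_rate):
        # Congestion: drop the highest layer and back off its join timer.
        if loss_rate > self.loss_threshold and self.level > 1:
            self.level -= 1
            self.join_timer[self.level] *= 2
            self.last_change = now

    def maybe_join_experiment(self, now):
        # Spare-capacity probe: subscribe to the next layer after a quiet period.
        if self.level < len(self.groups) and now - self.last_change >= self.join_timer[self.level]:
            self.level += 1
            self.last_change = now

    def subscribed(self):
        return self.groups[:self.level]

r = LayeredReceiver(["224.2.0.1", "224.2.0.2", "224.2.0.3"])
r.maybe_join_experiment(now=6.0)            # join experiment adds the second layer
r.on_loss_report(now=7.0, loss_rate=0.10)   # congestion: drop back to the base layer
print(r.subscribed())                       # -> ['224.2.0.1']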
Internet time synchronization: The network time protocol
, 1989
"... This memo describes the Network Time Protocol (NTP) designed to distribute time information in a large, diverse internet system operating at speeds from mundane to lightwave. It uses a returnabletime architecture in which a distributed subnet of time servers operating in a self-organizing, hierarchi ..."
Abstract - Cited by 617 (15 self)
This memo describes the Network Time Protocol (NTP), designed to distribute time information in a large, diverse internet system operating at speeds from mundane to lightwave. It uses a returnable-time architecture in which a distributed subnet of time servers, operating in a self-organizing, hierarchical, master-slave configuration, synchronizes local clocks within the subnet and to national time standards via wire or radio. The servers can also redistribute time information within a network via local routing algorithms and time daemons. The architectures, algorithms and protocols which have evolved into NTP over several years of implementation and refinement are described in this paper. The synchronization subnet which has been in regular operation in the Internet for the last several years is described, along with performance data showing that timekeeping accuracy throughout most portions of the Internet can ordinarily be maintained to within a few tens of milliseconds, even in cases of failure or disruption of clocks, time servers or networks. This memo describes the Network Time Protocol, which is specified as an Internet Standard in
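
For context, a small sketch of the round-trip calculation underlying this kind of clock synchronization is shown below: the standard NTP-style offset/delay estimate from the four timestamps of one request/response exchange. The example timestamps are made up.

# Offset and delay estimated from one client/server exchange, NTP-style.
# t1: client transmit, t2: server receive, t3: server transmit,
# t4: client receive, all in seconds.  Example values below are invented.

def offset_and_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) + (t3 - t4)) / 2.0   # estimated server clock minus client clock
    delay = (t4 - t1) - (t3 - t2)            # round-trip delay, excluding time spent at the server
    return offset, delay

# Client clock ~50 ms behind the server, 20 ms symmetric round trip, 10 ms server hold time.
print(offset_and_delay(0.000, 0.060, 0.070, 0.030))   # approximately (0.05, 0.02)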
Adaptive clustering for mobile wireless networks
- IEEE Journal on Selected Areas in Communications
, 1997
"... This paper describes a self-organizing, multihop, mobile radio network, which relies on a code division access scheme for multimedia support. In the proposed network architecture, nodes are organized into nonoverlapping clusters. The clusters are independently controlled and are dynamically reconfig ..."
Abstract - Cited by 556 (11 self)
This paper describes a self-organizing, multihop, mobile radio network, which relies on a code division access scheme for multimedia support. In the proposed network architecture, nodes are organized into nonoverlapping clusters. The clusters are independently controlled and are dynamically reconfigured as nodes move. This network architecture has three main advantages. First, it provides spatial reuse of the bandwidth due to node clustering. Second, bandwidth can be shared or reserved in a controlled fashion in each cluster. Finally, the cluster algorithm is robust in the face of topological changes caused by node motion, node failure and node insertion/removal. Simulation shows that this architecture provides an efficient, stable infrastructure for the integration of different types of traffic in a dynamic radio network.
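
A toy sketch of one common way to form such non-overlapping clusters follows: a lowest-ID style pass over a topology snapshot. It illustrates the general clustering idea only and is not the paper's exact algorithm.

# Toy lowest-ID clustering pass over a static topology snapshot.
# 'neighbors' maps each node id to the ids it can hear directly; the output
# maps every node to its clusterhead.  Illustration only, not the paper's algorithm.

def lowest_id_clusters(neighbors):
    head_of = {}
    for node in sorted(neighbors):          # visit nodes in increasing id order
        if node in head_of:
            continue                        # already absorbed into an earlier cluster
        head_of[node] = node                # lowest uncovered id becomes a clusterhead
        for nbr in neighbors[node]:
            head_of.setdefault(nbr, node)   # one-hop neighbors join this cluster
    return head_of

# Example chain topology 1-2-3-4-5.
topology = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
print(lowest_id_clusters(topology))   # {1: 1, 2: 1, 3: 3, 4: 3, 5: 5}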
A Hierarchical Internet Object Cache
- In Proceedings of the 1996 USENIX Technical Conference
, 1996
"... This paper discusses the design andperformance of a hierarchical proxy-cache designed to make Internet information systems scale better. The design was motivated by our earlier trace-driven simulation study of Internet traffic. We believe that the conventional wisdom, that the benefits of hierarch ..."
Abstract - Cited by 501 (6 self)
This paper discusses the design and performance of a hierarchical proxy-cache designed to make Internet information systems scale better. The design was motivated by our earlier trace-driven simulation study of Internet traffic. We believe that the conventional wisdom, that the benefits of hierarchical file caching do not merit the costs, warrants reconsideration in the Internet environment. The cache implementation supports a highly concurrent stream of requests. We present performance measurements that show that the cache outperforms other popular Internet cache implementations by an order of magnitude under concurrent load. These measurements indicate that hierarchy does not measurably increase access latency. Our software can also be configured as a Web-server accelerator; we present data showing that our httpd-accelerator is ten times faster than Netscape's Netsite and NCSA 1.4 servers. Finally, we relate our experience fitting the cache into the increasingly complex and operational world of Internet information systems, including issues related to security, transparency to cache-unaware clients, and the role of file systems in support of ubiquitous wide-area information systems.
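
As a rough sketch of how a request resolves through such a hierarchy (local cache, then sibling caches, then a parent or the origin server), consider the following. The class and method names are illustrative assumptions, not the internals of the cache described in the paper.

# Simplified resolution path in a hierarchical proxy cache: local hit,
# sibling hit, then parent or origin.  Names and structure are illustrative.

class HierarchicalCache:
    def __init__(self, siblings=(), parent=None):
        self.store = {}           # URL -> cached object
        self.siblings = siblings  # neighbor caches (queried in parallel in real systems)
        self.parent = parent      # optional upstream cache

    def fetch(self, url, origin):
        if url in self.store:                 # local hit
            return self.store[url]
        for sib in self.siblings:             # sibling hit avoids going upstream
            if url in sib.store:
                self.store[url] = sib.store[url]
                return self.store[url]
        upstream = self.parent.fetch(url, origin) if self.parent else origin(url)
        self.store[url] = upstream            # cache the object on the way back down
        return upstream

top = HierarchicalCache()
child = HierarchicalCache(parent=top)
obj = child.fetch("http://example.com/page", origin=lambda u: "object for " + u)
print(obj)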
End-to-End Routing Behavior in the Internet
, 1996
"... The large-scale behavior of routing in the Internet has gone virtually without any formal study, the exception being Chinoy's analysis of the dynamics of Internet routing information [Ch93]. We report on an analysis of 40,000 end-to-end route measurements conducted using repeated “traceroutes” ..."
Abstract - Cited by 660 (13 self)
The large-scale behavior of routing in the Internet has gone virtually without any formal study, the exception being Chinoy's analysis of the dynamics of Internet routing information [Ch93]. We report on an analysis of 40,000 end-to-end route measurements conducted using repeated “traceroutes” between 37 Internet sites. We analyze the routing behavior for pathological conditions, routing stability, and routing symmetry. For pathologies, we characterize the prevalence of routing loops, erroneous routing, infrastructure failures, and temporary outages. We find that the likelihood of encountering a major routing pathology more than doubled between the end of 1994 and the end of 1995, rising from 1.5% to 3.4%. For routing stability, we define two separate types of stability, “prevalence,” meaning the overall likelihood that a particular route is encountered, and “persistence,” the likelihood that a route remains unchanged over a long period of time. We find that Internet paths are heavily dominated by a single prevalent route, but that the time periods over which routes persist show wide variation, ranging from seconds up to days. About two-thirds of the Internet paths had routes persisting for either days or weeks. For routing symmetry, we look at the likelihood that a path through the Internet visits at least one different city in the two directions. At the end of 1995, this was the case half the time, and at least one different autonomous system was visited 30% of the time.
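
As a worked illustration of the "prevalence" metric described above (the overall likelihood that a particular route is encountered), the short sketch below computes the prevalence of the dominant route over a set of repeated traceroute observations. The sample routes are invented.

# Prevalence of the dominant route over repeated route observations for one
# source/destination pair.  The observations below are made-up examples.

from collections import Counter

def dominant_route_prevalence(observations):
    # observations: list of routes, each a tuple of hops from one traceroute
    counts = Counter(observations)
    route, hits = counts.most_common(1)[0]
    return route, hits / len(observations)

observations = [
    ("A", "B", "C"), ("A", "B", "C"), ("A", "D", "C"),
    ("A", "B", "C"), ("A", "B", "C"),
]
print(dominant_route_prevalence(observations))   # (('A', 'B', 'C'), 0.8)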
Multiple sequence alignment with the Clustal series of programs
- Nucleic Acids Res
, 2003
"... The Clustal series of programs are widely used in molecular biology for the multiple alignment of both nucleic acid and protein sequences and for preparing phylogenetic trees. The popularity of the programs depends on a number of factors, including not only the accuracy of the results, but also the ..."
Abstract - Cited by 725 (5 self)
The Clustal series of programs are widely used in molecular biology for the multiple alignment of both nucleic acid and protein sequences and for preparing phylogenetic trees. The popularity of the programs depends on a number of factors, including not only the accuracy of the results, but also the robustness, portability and user-friendliness of the programs. New features include NEXUS and FASTA format output, printing range numbers and faster tree calculation. Although Clustal was originally developed to run on a local computer, numerous Web servers have been set up, notably at the EBI
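
As a very small illustration of the progressive-alignment idea behind the Clustal programs (align the most similar sequences first, guided by pairwise scores), the toy sketch below picks the first pair to align. Real Clustal uses full dynamic-programming alignments and a proper guide tree; the scoring here is a deliberately crude stand-in.

# Toy first step of progressive alignment: score all pairs with a crude
# identity measure and pick the most similar pair to align first.
# Illustration only, not the Clustal scoring scheme.

from itertools import combinations

def identity(a, b):
    n = min(len(a), len(b))
    return sum(x == y for x, y in zip(a, b)) / n if n else 0.0

def first_pair_to_align(seqs):
    # seqs: dict of name -> sequence string
    return max(combinations(seqs, 2),
               key=lambda pair: identity(seqs[pair[0]], seqs[pair[1]]))

seqs = {"s1": "ACGTACGT", "s2": "ACGTACGA", "s3": "TTGTACCA"}
print(first_pair_to_align(seqs))   # ('s1', 's2')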
Monitoring the future: National survey results on drug use
- I: Secondary school students (NIH Publication No. 05-5726). Bethesda, MD: National Institute on Drug Abuse
, 2005
"... by ..."