Results 1 - 10 of 59
ExaScale Computing Study: Technology Challenges in Achieving Exascale Systems. Peter Kogge, Editor & Study Lead, 2008
Cited by 50 (1 self)
exchange and its publication does not constitute the Government's approval or disapproval of its ideas or findings.

NOTICE: Using Government drawings, specifications, or other data included in this document for any purpose other than Government procurement does not in any way obligate the U.S. Government. The fact that the Government formulated or supplied the drawings, specifications, or other data does not license the holder or any other person or corporation, or convey any rights or permission to manufacture, use, or sell any patented invention that may relate to them. APPROVED FOR PUBLIC RELEASE, DISTRIBUTION UNLIMITED.

DISCLAIMER: The following disclaimer was signed by all members of the Exascale Study Group (listed below): I agree that the material in this document reflects the collective views, ideas, opinions and findings of the study participants only, and not those of any of the universities, corporations, or other institutions with which they are affiliated. Furthermore, the material in this document does not reflect the official views, ideas, opinions and/or findings of DARPA, the Department of Defense, or of the United States government.
Fault secure encoder and decoder for nanomemory applications. IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 2009
Cited by 22 (1 self)
Abstract—Memory cells have been protected from soft errors for more than a decade; due to the increase in soft error rate in logic circuits, the encoder and decoder circuitry around the memory blocks have become susceptible to soft errors as well and must also be protected. We introduce a new approach to design fault-secure encoder and decoder circuitry for memory designs. The key novel contribution of this paper is identifying and defining a new class of error-correcting codes whose redundancy makes the design of fault-secure detectors (FSD) particularly simple. We further quantify the importance of protecting encoder and decoder circuitry against transient errors, illustrating a scenario where the system failure rate (FIT) is dominated by the failure rate of the encoder and decoder. We prove that Euclidean Geometry Low-Density Parity-Check (EG-LDPC) codes have the fault-secure detector capability. Using some of the smaller EG-LDPC codes, we can tolerate bit or nanowire defect rates of 10% and fault rates of upsets/device/cycle, achieving a FIT rate at or below one for the entire memory system and a memory density of bit/cm with nanowire pitch of 10 nm for memory blocks of 10 Mb or larger. Larger EG-LDPC codes can achieve even higher reliability and lower area overhead. Index Terms—Decoder, encoder, fault tolerant, memory, nanotechnology.
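The fault-secure detector described in this abstract ultimately reduces to checking a syndrome against the code's parity-check matrix. Below is a minimal behavioral sketch of that check, using a small (7,4) Hamming parity-check matrix as a stand-in for the paper's EG-LDPC codes; the matrix, bit ordering, and function name are illustrative assumptions, not the paper's design.

```python
# Simplified sketch of syndrome-based error detection, the core operation of a
# fault-secure detector (FSD). A (7,4) Hamming code stands in for EG-LDPC here;
# H and the bit ordering are illustrative, not taken from the paper.

H = [  # parity-check matrix over GF(2)
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def syndrome(codeword):
    """Return the syndrome H * c^T over GF(2); all-zero means no detected error."""
    return [sum(h * c for h, c in zip(row, codeword)) % 2 for row in H]

valid = [0, 0, 0, 0, 0, 0, 0]      # the all-zero word is always a codeword
corrupted = valid[:]
corrupted[3] ^= 1                  # single-bit upset

assert syndrome(valid) == [0, 0, 0]
assert any(syndrome(corrupted))    # nonzero syndrome flags the error
```

The sparsity that makes EG-LDPC attractive here is that each syndrome bit depends on only a few codeword bits, so a single transient fault inside the detector corrupts at most one check, keeping the detector itself fault-secure.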
The synthesis of robust polynomial arithmetic with stochastic logic. In DAC '08
Cited by 17 (11 self)
As integrated circuit technology plumbs ever greater depths in the scaling of feature sizes, maintaining the paradigm of deterministic Boolean computation is increasingly challenging. Indeed, mounting concerns over noise and uncertainty in signal values motivate a new approach: the design of stochastic logic, that is to say, digital circuitry that processes signals probabilistically, and so can cope with errors and uncertainty. In this paper, we present a general methodology for synthesizing stochastic logic for the computation of polynomial arithmetic functions, a category that is important for applications such as digital signal processing. The method is based on converting polynomials into a particular mathematical form – Bernstein polynomials – and then implementing the computation with stochastic logic. The resulting logic processes serial or parallel streams that are random at the bit level. In the aggregate, the computation becomes accurate, since the results depend only on the precision of the statistics. Experiments show that our method produces circuits that are highly tolerant of errors in the input stream, while the area-delay product of the circuit is comparable to that of deterministic implementations.
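The Bernstein-polynomial construction sketched in this abstract can be simulated behaviorally in a few lines: each cycle, n independent Bernoulli(x) bits are summed, and the sum selects which Bernoulli(b_i) coefficient stream drives the output bit. This is a software sketch of that count-and-select idea, not a gate-level design; the function name, stream length, and coefficients are illustrative assumptions.

```python
import random

def stochastic_bernstein(x, coeffs, num_bits=100_000, seed=0):
    """Estimate the Bernstein polynomial with coefficients [b_0, ..., b_n]
    at input probability x. Each cycle, n Bernoulli(x) bits are summed
    (a binomial count k), and the output bit is drawn from the b_k stream,
    so E[output] = sum_k C(n,k) x^k (1-x)^(n-k) b_k."""
    rng = random.Random(seed)
    n = len(coeffs) - 1
    ones = 0
    for _ in range(num_bits):
        k = sum(rng.random() < x for _ in range(n))  # binomial count of x-bits
        ones += rng.random() < coeffs[k]             # mux: sample the b_k stream
    return ones / num_bits

# B(x) with coefficients [0.25, 0.5, 0.75] simplifies to 0.25 + 0.5*x,
# so at x = 0.5 the estimate should converge to 0.5.
est = stochastic_bernstein(0.5, [0.25, 0.5, 0.75])
```

The error tolerance claimed in the abstract comes from this aggregate view: flipping a few input bits perturbs the estimated probability only slightly, rather than corrupting a high-order bit of a binary number.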
Low-density parity check codes for error correction in nanoscale memory. SRI, 2007
Cited by 14 (0 self)
The continued scaling of photolithographic fabrication techniques down to 32 nanometers and beyond faces enormous technology and economic barriers. Self-assembled devices such as silicon nanowires or carbon nanotubes show promise to not only achieve aggressive dimensions, but to help address power and other
Towards Defect-Tolerant Nanoscale Architectures. Sixth IEEE Conference on Nanotechnology (IEEE Nano 2006), 2006
Cited by 12 (8 self)
Abstract — Nanoscale computing systems show great potential but at the same time introduce new challenges not encountered in the world of conventional CMOS designs and manufacturing. For example, these systems need to work around layout and doping constraints resulting from unconventional bottom-up self-assembly, and need to cope with high manufacturing defect rates and transient faults. Unfortunately, most conventional defect-tolerance techniques are not directly applicable in nanoscale systems because they have been designed for very small defect rates. In this paper, we explore built-in defect-tolerance techniques on 2-D semiconductor nanowire (NW) arrays to make designs self-healing. Our approach combines circuit- and system-level techniques, and it does not require defect map extraction, reconfigurable devices, or addressing each cross-point as reconfigurable approaches do. We show that a defect-tolerant simple processor based on our approach would still be around 3X denser than an 18-nm CMOS version with equivalent functionality; a yield greater than 30% is achieved despite a fabric with 14% defective FETs. Keywords: semiconductor nanowire; defect tolerance; processor.
Fault-Tolerant Nanoscale Processors on Semiconductor Nanowire Grids, 2007
Cited by 12 (9 self)
Nanoscale processor designs pose new challenges not encountered in the world of conventional CMOS designs and manufacturing. Nanoscale devices based on crossed semiconductor nanowires (NWs) have promising characteristics in addition to providing great density advantage over conventional CMOS devices. This density advantage could, however, be easily lost when assembled into nanoscale systems and especially after techniques dealing with high defect rates and manufacturing related layout/doping constraints are incorporated. Most conventional defect/fault-tolerance techniques are not suitable in nanoscale designs because they are designed for very small defect rates and assume arbitrary layouts for required circuits. Reconfigurable approaches face fundamental challenges including a complex interface between the micro and nano components required for programming. In this paper, we present our work on adding fault-tolerance to all components of a processor implemented on a 2-D semiconductor nanowire (NW) fabric called NASICs. We combine and explore structural redundancy, built-in nanoscale error correcting circuitry, and system-level redundancy techniques and adapt the techniques to the NASIC fabric. Faulty signals caused by defects and other error sources are masked on-the-fly at various levels of granularity. Faults can be masked at up to 15% rates, while maintaining a 7X density advantage compared to an equivalent CMOS processor at projected 18 nm technology. Detailed analysis of yield, density, and area tradeoffs is provided for different error sources and fault distributions.
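The on-the-fly masking via structural redundancy mentioned in this abstract can be illustrated with its simplest instance, a 2-of-3 majority vote over redundant copies of a signal. This is a generic sketch of redundancy-based masking, not the specific NASIC circuit structure; the values below are illustrative.

```python
# Minimal sketch of fault masking via structural redundancy: three redundant
# copies of a signal feed a bitwise majority vote, so any single faulty copy
# is masked without any detection or reconfiguration step.

def majority(a, b, c):
    """Bitwise 2-of-3 majority vote over integer bit-vectors."""
    return (a & b) | (a & c) | (b & c)

good = 0b1011
faulty = good ^ 0b0100            # one replica suffers a single bit flip

assert majority(good, good, faulty) == good   # the fault is masked
```

The density cost of such replication is exactly why the paper combines it with error-correcting circuitry and system-level techniques rather than relying on replication alone.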
Dynamic Low-Density Parity Check Codes for Fault-tolerant Nanoscale Memory
Cited by 11 (0 self)
Abstract. New bottom-up techniques can build silicon nanowires (dimension < 10 nm) that exhibit remarkable electronic properties, but current assembly techniques yield very high defect and fault rates. Nanodevices built using these nanowires have static errors that can be addressed at fabrication time by testing and reconfiguration, but soft errors are problematic, with arrival rates expected to vary over the lifetime of a part. In this paper, we propose using a special variant of low-density parity-check (LDPC) codes — Euclidean Geometry LDPC (EG-LDPC) codes — to enable dynamic changes in the level of fault tolerance. Apart from high error-correcting ability and sparsity, a special property of EG-LDPC codes enables us to dynamically adjust the error-correcting capacity for improved system performance (e.g., lower power consumption) during periods of expected low fault arrival rate. We present a system architecture for nanomemory based on nanoPLA building blocks using EG-LDPC codes, and an analysis of its fault detection and correction capabilities.
Fault Tolerant Nano-Memory with Fault Secure Encoder and Decoder, 2007
Cited by 10 (2 self)
We introduce a nanowire-based, sublithographic memory architecture tolerant to transient faults. Both the storage elements and the supporting ECC encoder and corrector are implemented in dense, but potentially unreliable, nanowire-based technology. This compactness is made possible by a recently introduced Fault-Secure detector design [18]. Using Euclidean Geometry error-correcting codes (ECC), we identify particular codes which correct up to 8 errors in data words, achieving a FIT rate at or below one for the entire memory system for bit and nanowire transient failure rates as high as 10^-17 upsets/device/cycle with a total area below 1.7× the area of the unprotected memory for memories as small as 0.1 Gbit. We explore scrubbing designs and show the overhead for serial error correction and periodic data scrubbing can be below 0.02% for fault rates as high as 10^-20 upsets/device/cycle. We also present a design to unify the error-correction coding and circuitry used for permanent defect and transient fault tolerance.
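The FIT figures quoted in this abstract can be put in perspective with back-of-the-envelope arithmetic: FIT counts failures per 10^9 hours of operation. The clock frequency and device count in this sketch are illustrative assumptions, not values from the paper.

```python
# Back-of-the-envelope FIT arithmetic for the kind of numbers quoted above.
# FIT = expected failures per 10^9 hours. Clock rate and device count below
# are assumed for illustration only.

def raw_upset_fit(rate_per_device_cycle, clock_hz, num_devices):
    """Expected upsets per 10^9 hours, before any error correction."""
    upsets_per_hour = rate_per_device_cycle * clock_hz * 3600 * num_devices
    return upsets_per_hour * 1e9

# 10^-17 upsets/device/cycle, assumed 1 GHz clock, ~10^8 bits (0.1 Gbit):
fit = raw_upset_fit(1e-17, clock_hz=1e9, num_devices=1e8)  # ~3.6e12 FIT raw
```

Even a per-device rate as tiny as 10^-17 yields an enormous raw upset FIT at memory scale, which is why the ECC and scrubbing machinery must reduce the rate of *uncorrected* failures by many orders of magnitude to reach the paper's target of FIT at or below one.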
3-D nFPGA: a reconfigurable architecture for 3-D CMOS/nanomaterial hybrid digital circuits. IEEE Trans. Circuits Syst. I, 2007
Cited by 8 (4 self)
Abstract—In this paper, we introduce a novel reconfigurable architecture, named 3-D field-programmable gate array (3-D nFPGA), which utilizes 3-D integration techniques and new nanoscale materials synergistically. The proposed architecture is based on CMOS nanohybrid techniques that incorporate nanomaterials such as carbon nanotube bundles and nanowire crossbars into the CMOS fabrication process. This architecture also has built-in features for fault tolerance and heat alleviation. Using unique features of FPGAs and a novel 3-D stacking method enabled by the application of nanomaterials, 3-D nFPGA obtains a 4× footprint reduction compared to traditional CMOS-based 2-D FPGAs. With a customized design automation flow, we evaluate the performance and power of 3-D nFPGA driven by the 20 largest MCNC benchmarks. Results demonstrate that 3-D nFPGA is able to provide a performance gain of 2.6× with a small power overhead compared to the traditional 2-D FPGA architecture. Index Terms—3-D integration, nanoelectronics, nanotube, nanowire, performance, reconfigurable logic.
Nanowire Addressing with Randomized-Contact Decoders, 2006
Cited by 7 (4 self)
Methods for assembling crossbars from nanowires (NWs) have been designed and implemented. Methods for controlling individual NWs within a crossbar have also been proposed, but implementation remains a challenge. A NW decoder is a device that controls many NWs with a much smaller number of lithographically produced mesoscale wires (MWs). Unlike traditional demultiplexers, all proposed NW decoders are assembled stochastically. In a randomized-contact decoder (RCD) [11], for example, field-effect transistors are randomly created at about half of the NW/MW junctions. In this paper, we tightly bound the number of MWs required to produce a correctly functioning RCD with high probability. We show that the number of MWs is logarithmic in the number of NWs, even when errors occur. We also analyze the overhead associated with controlling a stochastically assembled decoder. As we explain, lithographically-produced control circuitry must store information regarding which MWs control which NWs. This requires more area than the MWs themselves, but has received little attention elsewhere.
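The logarithmic bound on mesoscale wires described in this abstract can be sanity-checked with a small Monte Carlo simulation: model each NW/MW junction as receiving a working contact with probability 1/2, giving every nanowire a random binary address, and check that all addresses are distinct. The sizes, trial counts, and function names below are illustrative assumptions, not the paper's construction or analysis.

```python
import random

# Monte Carlo sketch of a randomized-contact decoder (RCD): each NW/MW
# junction independently becomes a working contact with probability 1/2,
# so each nanowire gets a random M-bit address over the mesoscale wires.
# The decoder can address every NW individually iff all addresses differ.

def rcd_all_distinct(num_nws, num_mws, rng):
    addresses = {tuple(rng.randrange(2) for _ in range(num_mws))
                 for _ in range(num_nws)}
    return len(addresses) == num_nws

def success_rate(num_nws, num_mws, trials=500, seed=1):
    rng = random.Random(seed)
    return sum(rcd_all_distinct(num_nws, num_mws, rng)
               for _ in range(trials)) / trials

# A birthday-bound argument: collision probability is about N^2 / 2^(M+1),
# so M growing logarithmically in N suffices for high success probability.
```

For example, 32 nanowires under 20 mesoscale wires almost always get distinct addresses, while 5 mesoscale wires (only 32 possible addresses) almost never suffice, matching the logarithmic scaling the paper proves.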