Results 1 - 10 of 19
Activity-Driven, Event-Based Vision Sensors
"... Abstract ‐ The four chips [1‐4] presented in the special session on “Activity‐driven, event‐based vision sensors ” quickly output compressed digital data in the form of events. These sensors reduce redundancy and latency and increase dynamic range compared with conventional imagers. The digital sens ..."
Abstract
-
Cited by 19 (4 self)
- Add to MetaCart
(Show Context)
Abstract ‐ The four chips [1‐4] presented in the special session on “Activity‐driven, event‐based vision sensors ” quickly output compressed digital data in the form of events. These sensors reduce redundancy and latency and increase dynamic range compared with conventional imagers. The digital sensor output is easily interfaced to conventional digital post processing, where it reduces the latency and cost of post processing compared to imagers. The asynchronous data could spawn a new area of DSP that breaks from conventional Nyquist rate signal processing. This paper reviews the rationale and history of this event‐based approach, introduces sensor functionalities, and gives an overview of the papers in this session. The paper concludes with a
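To make the event-based output format concrete, the sketch below models a minimal address-event stream of the kind these sensors produce: each event carries a pixel address, a polarity, and a timestamp, and downstream processing only touches pixels that actually changed. The field names and the `process_events` consumer are illustrative placeholders, not taken from any of the cited chips.

```python
from dataclasses import dataclass
from typing import Iterable

@dataclass
class AddressEvent:
    """One address-event: which pixel changed, in which direction, and when."""
    x: int          # pixel column
    y: int          # pixel row
    polarity: int   # +1 brightness increase, -1 brightness decrease
    t_us: int       # timestamp in microseconds

def process_events(events: Iterable[AddressEvent]) -> dict:
    """Toy consumer: accumulate activity per pixel instead of scanning frames.

    Only pixels that emitted events are visited, which is where the
    redundancy and latency savings over frame-based readout come from.
    """
    activity = {}
    for ev in events:
        activity[(ev.x, ev.y)] = activity.get((ev.x, ev.y), 0) + 1
    return activity

# Example: three events from two pixels
stream = [AddressEvent(12, 40, +1, 1000),
          AddressEvent(12, 40, -1, 1850),
          AddressEvent(97, 3, +1, 2100)]
print(process_events(stream))   # {(12, 40): 2, (97, 3): 1}
```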
Ultralow-power electronics for biomedical applications
- Annual Review of Biomedical Engineering
, 2008
"... Abstract The electronics of a general biomedical device consist of energy delivery, analog-to-digital conversion, signal processing, and communication subsystems. Each of these blocks must be designed for minimum energy consumption. Specific design techniques, such as aggressive voltage scaling, dy ..."
Abstract
-
Cited by 17 (1 self)
- Add to MetaCart
Abstract The electronics of a general biomedical device consist of energy delivery, analog-to-digital conversion, signal processing, and communication subsystems. Each of these blocks must be designed for minimum energy consumption. Specific design techniques, such as aggressive voltage scaling, dynamic power-performance management, and energy-efficient signaling, must be employed to adhere to the stringent energy constraint. The constraint itself is set by the energy source, so energy harvesting holds tremendous promise toward enabling sophisticated systems without straining user lifestyle. Further, once harvested, efficient delivery of the low-energy levels, as well as robust operation in the aggressive low-power modes, requires careful understanding and treatment of the specific design limitations that dominate this realm. We outline the performance and power constraints of biomedical devices, and present circuit techniques to achieve complete systems operating down to power levels of microwatts. In all cases, approaches that leverage advanced technology trends are emphasized.
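As a rough illustration of why aggressive voltage scaling is such a powerful lever, the dynamic switching energy of digital logic scales quadratically with the supply voltage; the relation below is the generic textbook expression, and the example numbers are illustrative rather than taken from the paper.

```latex
E_{\mathrm{switch}} = \alpha\, C\, V_{DD}^{2}
\qquad\Rightarrow\qquad
\frac{E(0.6\,\mathrm{V})}{E(1.2\,\mathrm{V})} = \left(\frac{0.6}{1.2}\right)^{2} = 0.25
```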
Analog VLSI Biophysical Neurons and Synapses With Programmable Membrane Channel Kinetics
- IEEE Trans. Biomed. Circuits Syst
, 2010
"... Abstract—We present and characterize an analog VLSI network of 4 spiking neurons and 12 conductance-based synapses, implementing a silicon model of biophysical membrane dynamics and detailed channel kinetics in 384 digitally programmable parameters. Each neuron in the analog VLSI chip (NeuroDyn) imp ..."
Abstract
-
Cited by 12 (4 self)
- Add to MetaCart
(Show Context)
Abstract—We present and characterize an analog VLSI network of 4 spiking neurons and 12 conductance-based synapses, implementing a silicon model of biophysical membrane dynamics and detailed channel kinetics in 384 digitally programmable parameters. Each neuron in the analog VLSI chip (NeuroDyn) implements generalized Hodgkin-Huxley neural dynamics in 3 channel variables, each with 16 parameters defining channel conductance, reversal potential, and voltage-dependence profile of the channel kinetics. Likewise, 12 synaptic channel variables implement a rate-based first-order kinetic model of neurotransmitter and receptor dynamics, accounting for NMDA and non-NMDA type chemical synapses. The biophysical origin of all 384 parameters in 24 channel variables supports direct interpretation of the results of adapting/tuning the parameters in terms of neurobiology. We present experimental results from the chip characterizing single neuron dynamics, single synapse dynamics, and multi-neuron network dynamics showing phase-locking behavior as a function of synaptic coupling strength. Uniform temporal scaling of the dynamics of membrane and gating variables is demonstrated by tuning a single current parameter, yielding variable speed output exceeding real time. The 0.5 μm CMOS chip measures 3 mm × 3 mm and consumes 1.29 mW. Index Terms—Neuromorphic engineering, reconfigurable neural and synaptic dynamics, silicon neurons, subthreshold metal–oxide semiconductor (MOS), translinear circuits.
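For reference, the generalized Hodgkin-Huxley form implemented per neuron can be written as the standard textbook equations below; the chip's programmable parameters correspond to the maximal conductances, reversal potentials, and the voltage dependence of the gating rate functions, as the abstract states. The exponents and symbols here are the usual ones, not the chip's exact parameterization.

```latex
C_m \frac{dV_m}{dt} = -\sum_i \bar{g}_i\, m_i^{p_i} h_i^{q_i}\,(V_m - E_i) + I_{\mathrm{ext}},
\qquad
\frac{dx}{dt} = \alpha_x(V_m)\,(1 - x) - \beta_x(V_m)\,x
```

where each gating variable x relaxes with voltage-dependent opening and closing rates α and β.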
A video time encoding machine
- in Proceedings of the 15th IEEE International Conference on Image Processing (ICIP ’08)
, 2008
"... Time encoding is a real-time asynchronus mechanism of mapping analog amplitude information into multidimensional time sequences. We investigate the exact representation of analog video streams with a Time Encoding Machine realized with a population of spiking neurons. We also provide an algorithm th ..."
Abstract
-
Cited by 9 (6 self)
- Add to MetaCart
Time encoding is a real-time asynchronous mechanism of mapping analog amplitude information into multidimensional time sequences. We investigate the exact representation of analog video streams with a Time Encoding Machine realized with a population of spiking neurons. We also provide an algorithm that perfectly recovers streaming video from the spike trains of the neural population. Finally, we analyze the quality of recovery of a space-time separable video stream encoded with a population of integrate-and-fire neurons and demonstrate that the quality of recovery increases as a function of the population size. Index Terms — time encoding, video coding, integrate-and-fire neurons, frames, Gabor wavelets.
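A minimal sketch of the integrate-and-fire time encoding idea, assuming a single leak-free neuron with a fixed threshold and reset by subtraction; the function name and parameters are illustrative, not the authors' implementation, and recovery from the spike times is omitted.

```python
import numpy as np

def if_time_encode(signal, dt, threshold, bias=0.0):
    """Encode a sampled analog signal into spike times with an ideal
    integrate-and-fire neuron (no leak, reset by subtraction).

    Between consecutive spikes the integral of (signal + bias) equals the
    threshold, which is the property that makes recovery possible in theory.
    """
    spike_times = []
    integral = 0.0
    for k, u in enumerate(signal):
        integral += (u + bias) * dt
        if integral >= threshold:
            spike_times.append(k * dt)
            integral -= threshold
    return spike_times

# Example: encode a slow sinusoid sampled at 10 kHz
t = np.arange(0.0, 1.0, 1e-4)
u = 0.5 * np.sin(2 * np.pi * 3 * t)
spikes = if_time_encode(u, dt=1e-4, threshold=0.01, bias=1.0)
print(len(spikes), "spikes; first few:", spikes[:3])
```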
On real-time AER 2D convolutions hardware for neuromorphic spike based cortical processing
- IEEE Trans. Neural Networks
, 2008
"... ..."
A 3.6 μs latency asynchronous frame-free event-based dynamic vision sensor
- IEEE J. Solid State Circ
, 2011
"... Abstract−This paper presents a 128x128 dynamic vision sensor. Each pixel detects temporal changes in the local illumination. A minimum illumination temporal contrast of 10 % can be detected. A compact preamplification stage has been introduced that allows to improve the minimum detectable contrast o ..."
Abstract
-
Cited by 6 (2 self)
- Add to MetaCart
Abstract—This paper presents a 128 × 128 dynamic vision sensor. Each pixel detects temporal changes in the local illumination. A minimum illumination temporal contrast of 10% can be detected. A compact preamplification stage has been introduced that improves the minimum detectable contrast over previous designs while reducing the pixel area by 1/3. The pixel responds to illumination changes in less than 3.6 µs. The ability of the sensor to capture very fast moving objects, rotating at 10K revolutions per second, has been verified experimentally. A frame-based sensor capable of achieving this would require at least 100K frames per second.
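The pixel behaviour described above can be summarised by a simple temporal-contrast model: an event fires whenever the log-intensity at a pixel has moved by more than the contrast threshold since the last event. The sketch below is a behavioural model under that assumption, not the circuit itself, and the 10% threshold is reused only as an example value.

```python
import math

def temporal_contrast_events(intensities, threshold=0.10):
    """Behavioural model of a change-detecting pixel.

    `intensities` is a time series of illumination values for one pixel.
    An ON (+1) or OFF (-1) event fires whenever log-intensity has moved by
    more than `threshold` since the last event; the reference level is then
    updated to the current value.
    """
    events = []
    ref = math.log(intensities[0])
    for k, i_k in enumerate(intensities[1:], start=1):
        d = math.log(i_k) - ref
        if abs(d) >= threshold:
            events.append((k, +1 if d > 0 else -1))
            ref = math.log(i_k)
    return events

# A pixel that brightens, then darkens
trace = [100, 105, 112, 125, 120, 108, 95]
print(temporal_contrast_events(trace))   # [(2, 1), (3, 1), (5, -1), (6, -1)]
```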
A 128 × 128 120 dB 15 μs Latency Asynchronous Temporal Contrast Vision Sensor
"... Abstract—This paper describes a 128 128 pixel CMOS vision sensor. Each pixel independently and in continuous time quantizes local relative intensity changes to generate spike events. These events appear at the output of the sensor as an asynchronous stream of digital pixel addresses. These address-e ..."
Abstract
-
Cited by 6 (0 self)
- Add to MetaCart
(Show Context)
Abstract—This paper describes a 128 × 128 pixel CMOS vision sensor. Each pixel independently and in continuous time quantizes local relative intensity changes to generate spike events. These events appear at the output of the sensor as an asynchronous stream of digital pixel addresses. These address-events signify scene reflectance change and have sub-millisecond timing precision. The output data rate depends on the dynamic content of the scene and is typically orders of magnitude lower than those of conventional frame-based imagers. By combining an active continuous-time front-end logarithmic photoreceptor with a self-timed switched-capacitor differencing circuit, the sensor achieves an array mismatch of 2.1% in relative intensity event threshold and a pixel bandwidth of 3 kHz under 1 klux scene illumination. Dynamic range is 120 dB and chip power consumption is 23 mW. Event latency shows weak light dependency with a minimum of 15 μs at 1 klux pixel illumination. The sensor is built in a 0.35 μm 4M2P process. It has 40 × 40 μm² pixels with 9.4% fill factor. By providing high pixel bandwidth, wide dynamic range, and precisely timed sparse digital output, this silicon retina provides an attractive combination of characteristics for low-latency dynamic vision under uncontrolled illumination with low post-processing requirements. Index Terms—Address-event representation (AER), asynchronous vision sensor, high-speed imaging, image sensors, machine vision, neural network hardware, neuromorphic circuit, robot vision systems, visual system, wide dynamic range imaging.
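For context, the quoted 120 dB dynamic range corresponds, under the usual 20 log convention for image sensors, to six decades of scene illumination:

```latex
\mathrm{DR} = 20 \log_{10}\!\left(\frac{I_{\max}}{I_{\min}}\right) = 120\ \mathrm{dB}
\;\;\Rightarrow\;\;
\frac{I_{\max}}{I_{\min}} = 10^{120/20} = 10^{6}
```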
A Calibration Technique for Very Low Current and Compact Tunable Neuromorphic Cells: Application
"... Abstract—Low current applications, like neuromorphic circuits, where operating currents can be as low as a few nanoamperes or less, suffer from huge transistor mismatches, resulting in around or less than 1-bit precisions. Recently, a neuromorphic programmable-kernel 2-D convolution chip has been re ..."
Abstract
-
Cited by 2 (1 self)
- Add to MetaCart
(Show Context)
Abstract—Low-current applications, like neuromorphic circuits, where operating currents can be as low as a few nanoamperes or less, suffer from huge transistor mismatches, resulting in precisions of around 1 bit or less. Recently, a neuromorphic programmable-kernel 2-D convolution chip has been reported where each pixel included two compact calibrated digital-to-analog converters (DACs) of 5-bit resolution, for currents down to picoamperes. Those DACs were based on MOS ladder structures, which although compact require 3N + 1 unit transistors (N is the number of calibration bits). Here, we present a new calibration approach not based on ladders, but on individually calibratable current sources made with MOS transistors of digitally adjustable length, which require only N unit-sized transistors. The scheme includes a translinear circuit-based tuning scheme, which allows us to expand the operating range of the calibrated circuits with graceful precision degradation, over four decades of operating currents. Experimental results are provided for 5-bit resolution DACs operating at 20 nA using two different translinear tuning schemes. Maximum measured precision is 5.05 and 7.15 bits, respectively, for the two DAC schemes. Index Terms—Analog, calibration, mismatch, subthreshold.
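To illustrate what a 5-bit per-cell calibration buys, the sketch below shows the generic procedure such a scheme implies: sweep the calibration code of one cell, measure its output current, and keep the code that lands closest to the target. The current model and the `measure_current` callback are placeholders, not the circuit or procedure described in the paper.

```python
def calibrate_cell(measure_current, target_na, bits=5):
    """Generic per-cell calibration: try every code, keep the best one.

    `measure_current(code)` is assumed to return the cell's output current
    in nA for a given calibration code; on a chip this would be a current
    measurement, here it is just a callback.
    """
    best_code, best_err = 0, float("inf")
    for code in range(2 ** bits):
        err = abs(measure_current(code) - target_na)
        if err < best_err:
            best_code, best_err = code, err
    return best_code, best_err

# Toy cell model: current varies roughly linearly with code plus a random
# mismatch offset specific to this cell.
mismatch = 3.7  # nA
model = lambda code: 10.0 + 0.8 * code + mismatch
code, err = calibrate_cell(model, target_na=20.0)
print(f"code={code}, residual error={err:.2f} nA")
```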
A high-performance hardware architecture for a frameless stereo vision algorithm implemented on an FPGA platform
- IEEE Conference on Computer Vision and Pattern Recognition
, 2014
"... As a novelty, in this paper we present an event-based stereo vision matching approach based on time-correlation using segmentation to restrict the matching process to ac-tive image areas, exploiting the event-driven behavior of a silicon retina sensor. Stereo matching is used in depth generating cam ..."
Abstract
-
Cited by 1 (0 self)
- Add to MetaCart
(Show Context)
As a novelty, in this paper we present an event-based stereo vision matching approach based on time-correlation, using segmentation to restrict the matching process to active image areas and exploiting the event-driven behavior of a silicon retina sensor. Stereo matching is used in depth-generating camera systems for solving the correspondence problem and reconstructing 3D data. Using conventional frame-based cameras, this correspondence problem is a time-consuming and computationally expensive task. To overcome this issue, embedded systems can be used to speed up the calculation of stereo matching results. The silicon retina delivers asynchronous events when the illumination changes instead of synchronous intensity or color images. It provides sparse input data and therefore the output of the stereo vision algorithm (depth map) is also sparse. The high temporal resolution of such event-driven sensors leads to high data rates. To handle these data rates and the correspondence problem in real time, we implemented our stereo matching algorithm on a field programmable gate array (FPGA). The results show that our matching criterion, based on the time of occurrence of an event, leads to a small average distance error, and the parallel hardware architecture and efficient memory utilization result in a frame rate of up to 1140 fps.
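A minimal software sketch of the time-correlation matching idea, assuming rectified cameras (candidates lie on the same row) and taking the candidate with the closest timestamp within a small window as the match; the FPGA design in the paper is of course more involved, and the event tuples and window size here are illustrative.

```python
def match_events(left, right, max_dt_us=500):
    """Match each left event (x, y, t_us) to the right event on the same row
    whose timestamp is closest, provided it lies within `max_dt_us`.

    Returns (left_event, right_event, disparity) triples. Brute force for
    clarity; a real-time design would index right events by row.
    """
    matches = []
    for lx, ly, lt in left:
        candidates = [(rx, ry, rt) for rx, ry, rt in right
                      if ry == ly and abs(rt - lt) <= max_dt_us]
        if candidates:
            rx, ry, rt = min(candidates, key=lambda e: abs(e[2] - lt))
            matches.append(((lx, ly, lt), (rx, ry, rt), lx - rx))
    return matches

left_events  = [(60, 12, 1000), (61, 12, 1040)]
right_events = [(52, 12, 1010), (53, 12, 1055), (80, 30, 1020)]
print(match_events(left_events, right_events))
```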
Embedded neuromorphic vision for humanoid robots
- In ECVW
, 2011
"... We are developing an embedded vision system for the humanoid robot iCub, inspired by the biology of the mammalian visual system, including concepts such as stimulusdriven, asynchronous signal sensing and processing. It comprises stimulus-driven sensors, a dedicated embedded processor and an event-ba ..."
Abstract
-
Cited by 1 (0 self)
- Add to MetaCart
(Show Context)
We are developing an embedded vision system for the humanoid robot iCub, inspired by the biology of the mammalian visual system, including concepts such as stimulus-driven, asynchronous signal sensing and processing. It comprises stimulus-driven sensors, a dedicated embedded processor, and an event-based software infrastructure for processing visual stimuli. These components are integrated with the existing standard machine vision modules currently implemented on the robot, in a configuration that exploits the best features of both: the high-resolution, color, frame-based vision and the neuromorphic low-redundancy, wide dynamic range, and high temporal resolution event-based sensors. This approach seeks to combine various styles of vision hardware with sensorimotor systems to complement and extend the current state of the art.