"... Abstract We introduce a method to train Binarized Neural Networks (BNNs) -neural networks with binary weights and activations at run-time. At train-time the binary weights and activations are used for computing the parameter gradients. During the forward pass, BNNs drastically reduce memory size an ..."
Abstract
We introduce a method to train Binarized Neural Networks (BNNs): neural networks with binary weights and activations at run-time. At train-time, the binary weights and activations are used for computing the parameter gradients. During the forward pass, BNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operations, which is expected to substantially improve power-efficiency. To validate the effectiveness of BNNs, we conducted two sets of experiments on the Torch7 and Theano frameworks. On both, BNNs achieved nearly state-of-the-art results on the MNIST, CIFAR-10 and SVHN datasets. We also report our preliminary results on the challenging ImageNet dataset. Last but not least, we wrote a binary matrix multiplication GPU kernel with which it is possible to run our MNIST BNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy. The code for training and running our BNNs is available online.
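The replacement of arithmetic with bit-wise operations that the abstract alludes to rests on a standard identity: for vectors over {-1, +1}, a dot product can be computed with XOR and a population count. Below is a minimal NumPy sketch of that identity, not the authors' Torch7/Theano code or their GPU kernel; the variable names and the bit-packing scheme are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64

# Random {-1, +1} activations and weights, as a BNN uses at run-time.
a = rng.choice([-1, 1], size=n)
w = rng.choice([-1, 1], size=n)

# Encode +1 -> bit 1 and -1 -> bit 0, packing each vector into one integer.
a_bits = int("".join("1" if v > 0 else "0" for v in a), 2)
w_bits = int("".join("1" if v > 0 else "0" for v in w), 2)

# Bits that differ after XOR are the positions where a and w disagree, so
# a . w = (#agreements) - (#disagreements) = n - 2 * popcount(a XOR w).
disagreements = bin(a_bits ^ w_bits).count("1")
dot_bitwise = n - 2 * disagreements

assert dot_bitwise == int(a @ w)  # matches the ordinary arithmetic dot product
print(dot_bitwise)
```

A real kernel packs weights and activations into machine words and uses hardware popcount instructions, which is what makes the reported speedup plausible; the identity itself is all this sketch is meant to demonstrate.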
Bitwise Neural Networks
"... Based on the assumption that there exists a neu-ral network that efficiently represents a set of Boolean functions between all binary inputs and outputs, we propose a process for developing and deploying neural networks whose weight param-eters, bias terms, input, and intermediate hid-den layer outp ..."
Abstract
Based on the assumption that there exists a neural network that efficiently represents a set of Boolean functions between all binary inputs and outputs, we propose a process for developing and deploying neural networks whose weight parameters, bias terms, input, and intermediate hidden layer output signals are all binary-valued, and require only basic bit logic for the feedforward pass. The proposed Bitwise Neural Network (BNN) is especially suitable for resource-constrained environments, since it replaces either floating or fixed-point arithmetic with significantly more efficient bitwise operations. Hence, the BNN requires far less spatial complexity, less memory bandwidth, and less power consumption in hardware. In order to design such networks, we propose to add a few training schemes, such as weight compression and noisy backpropagation, which result in a bitwise network that performs almost as well as its corresponding real-valued network. We test the proposed network on the MNIST dataset, represented using binary features, and show that BNNs result in competitive performance while offering dramatic computational savings.
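As a concrete illustration of a feedforward pass built from "only basic bit logic," here is a hedged NumPy sketch of a single layer whose inputs, weights, bias terms, and outputs are all in {-1, +1}. The layer size, names, and the sign thresholding are assumptions for exposition, not the paper's exact formulation, and the training schemes it mentions (weight compression and noisy backpropagation) are not shown.

```python
import numpy as np

def bitwise_layer(x, W, b):
    """One hypothetical bitwise layer: x, W, and b are all in {-1, +1}.

    Each unit's pre-activation is its count of XNOR agreements with the
    input minus its disagreements, plus a binary bias; the output is the
    sign of that count, so the pass needs only bit logic and counting.
    """
    agree = (x[None, :] == W).sum(axis=1)  # XNOR popcount per hidden unit
    pre = 2 * agree - W.shape[1] + b       # agreements - disagreements + bias
    return np.where(pre >= 0, 1, -1)       # binary output signal

rng = np.random.default_rng(0)
x = rng.choice([-1, 1], size=16)           # binary input features
W = rng.choice([-1, 1], size=(4, 16))      # binary weights for 4 hidden units
b = rng.choice([-1, 1], size=4)            # binary bias terms
print(bitwise_layer(x, W, b))
```

Stacking such layers yields a network whose entire inference path avoids multiplications, which is the source of the spatial-complexity, bandwidth, and power savings the abstract claims.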