SCIENCE and TELECOMMUNICATION TECHNOLOGIES
Citations
1521 | Gradient-based learning applied to document recognition
- LeCun, Bottou, et al.
- 1998
Citation Context ... behind affective analysis largely lacks in comparison to their counterparts in traditional computer vision and multimedia tasks. Promising results obtained using Convolutional Neural Networks (CNNs) [13] in many fundamental vision tasks have led us to consider the efficacy of such machinery for higher abstraction tasks like sentiment analysis, i.e. classifying the visual sentiments (either positive or... |
837 | ImageNet: A Large-Scale Hierarchical Image Database
- Deng, Dong, et al.
Citation Context ...st 1,200 ANP detectors are released under the name of SentiBank. CNNs applied to Visual Sentiment Analysis: The increase in computational power in GPUs and the creation of large image datasets such as [3] have allowed Convolutional Neural Networks (CNNs) to show outstanding performance in computer vision challenges [11], [22], [4]. And despite requiring huge amounts of training samples to tune their m... |
315 | Emotion: A psychoevolutionary synthesis - Plutchik - 1980 |
280 | Visualizing data using t-SNE
- Maaten, Hinton
Citation Context ...longing to the positive class and low scores to those belonging to the negative class. 3.3.5.2. t-Distributed Stochastic Neighbor Embedding (t-SNE): t-Distributed Stochastic Neighbor Embedding (t-SNE) [30] is a dimensionality reduction algorithm that seeks a low-dimensional embedding space while preserving high-dimensional distance information. This representation has become particularly popular in th... |
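The snippet above describes t-SNE's core idea: preserving high-dimensional distance information in a low-dimensional embedding. Its first step converts pairwise distances into Gaussian conditional similarities p(j|i). A minimal pure-Python sketch of that step follows; it uses a fixed bandwidth `sigma` rather than t-SNE's per-point perplexity calibration (an assumption for brevity, not the full algorithm).

```python
import math

def conditional_similarities(points, sigma=1.0):
    """t-SNE's high-dimensional affinities: for each point i, a Gaussian
    distribution p(j|i) over its neighbours, so nearby points get high
    probability and distant ones get probability near zero."""
    n = len(points)
    p = [[0.0] * n for _ in range(n)]
    for i in range(n):
        # Squared Euclidean distances from point i to every other point.
        d2 = [sum((a - b) ** 2 for a, b in zip(points[i], points[j]))
              for j in range(n)]
        weights = [0.0 if j == i else math.exp(-d2[j] / (2 * sigma ** 2))
                   for j in range(n)]
        total = sum(weights)
        p[i] = [w / total for w in weights]
    return p

# Two tight clusters: almost all affinity mass stays within each cluster.
pts = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
P = conditional_similarities(pts, sigma=0.5)
```

The full method then finds low-dimensional coordinates whose Student-t similarities match these distributions.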
192 | Caffe: Convolutional architecture for fast feature embedding.
- Jia, Shelhamer, et al.
- 2014
Citation Context ...rs are followed by pooling and normalization layers, while a pooling layer is placed between the last convolutional layer and the first fully connected one. The experiments were performed using Caffe [6], a publicly available deep learning framework. Figure 4: Pipeline of the proposed Visual Sentiment Analysis framework. We adapted CaffeNet to a sentiment prediction task (see Figure 4) using the Twitt... |
133 | Visualizing and Understanding Convolutional Networks
- Zeiler, Fergus
- 2013
Citation Context ... analysis actually generates high accuracy rates. On the other hand, it seems that visual sentiment prediction architectures also benefit from a larger number of convolutional layers, as suggested by [28] for the task of object recognition. Averaging the prediction over modified versions of the input image (oversampling) results in a consistent improvement in the prediction accuracy. This behavior, wh... |
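The oversampling described in the snippet above averages a classifier's score over several modified views of one image (e.g. crops and mirrors). A toy sketch of that averaging, with a stand-in "image" and scoring function that are assumptions for illustration, not the thesis code:

```python
def oversampled_score(image, score_fn, transforms):
    """Average the classifier score over several modified versions of the
    input (e.g. crops and horizontal flips), as in Caffe-style oversampling.
    Averaging smooths out noise introduced by any single view."""
    views = [t(image) for t in transforms]
    return sum(score_fn(v) for v in views) / len(views)

# Toy stand-ins: an "image" is a list of rows and the "classifier" just
# scores mean brightness.
image = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
mean_brightness = lambda img: sum(sum(r) for r in img) / sum(len(r) for r in img)
transforms = [
    lambda img: img,                          # identity (centre view)
    lambda img: [row[::-1] for row in img],   # horizontal flip
    lambda img: [r[:2] for r in img[:2]],     # top-left crop
]
score = oversampled_score(image, mean_brightness, transforms)
```

With a real CNN, `score_fn` would be one forward pass and the transforms would be the ten standard crops/flips.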
113 | CNN features off-the-shelf: an astounding baseline for recognition
- Razavian, Azizpour, et al.
- 2014
Citation Context ...layer analysis of the fine-tuned network. Figure 5: Experimental setup for the layer analysis using linear classifiers. The outputs of individual layers have been previously used as visual descriptors [19], [20], where each neuron's activation is seen as a component of the feature vector. Traditionally, top layers have been selected for this purpose [25] as they are thought to encode high-level informa... |
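The snippet above treats each neuron's activation as one component of a feature vector fed to a linear classifier. A self-contained sketch of that setup, using synthetic "activation" vectors and a simple perceptron in place of the SVM/Softmax classifiers used in the thesis (both stand-ins are assumptions for illustration):

```python
import random

def train_perceptron(features, labels, epochs=20, lr=0.1):
    """Linear classifier over 'activation' vectors: each vector component
    plays the role of one neuron's output, exactly as when layer outputs
    are reused as visual descriptors."""
    w = [0.0] * len(features[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):   # y is +1 or -1
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# Synthetic stand-ins for layer activations: the positive class activates
# the first two "neurons", the negative class the last two.
random.seed(0)
pos = [[1 + random.random(), 1 + random.random(), 0, 0] for _ in range(20)]
neg = [[0, 0, 1 + random.random(), 1 + random.random()] for _ in range(20)]
X, y = pos + neg, [1] * 20 + [-1] * 20
w, b = train_perceptron(X, y)
accuracy = sum(1 for x, t in zip(X, y)
               if t * (sum(wi * xi for wi, xi in zip(w, x)) + b) > 0) / len(X)
```

In the layer-wise analysis, one such classifier would be trained per layer, and the comparison of their accuracies reveals how much sentiment-relevant information each depth encodes.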
71 | Learning and Transferring Mid-Level Image Representations using Convolutional Neural Networks.
- Oquab, Bottou, et al.
- 2014
Citation Context ...er vision challenges [11], [22], [4]. And despite requiring huge amounts of training samples to tune their millions of parameters, CNNs have proved to be very effective in domain transfer experiments [16]. This interesting property of CNNs is applied to the task of visual sentiment prediction in [25], where the winning architecture of ILSVRC 2012 [11] (5 convolutional and 3 fully connected layers) is ... |
56 | Affective image classification using features inspired by psychology and art theory.
- Machajdik, Hanbury
- 2010
Citation Context ... towards bridging the affective gap, or the conceptual and computational divide between low-level features and high-level affective semantics, have been presented over the years for visual multimedia [14], [5], [1], [9], but the performance has remained fairly conservative. In addition, the intuition behind affective analysis largely lacks in comparison to their counterparts in traditional computer visi... |
46 | Going deeper with convolutions. arXiv preprint arXiv:1409.4842
- Szegedy, Liu, et al.
- 2014
Citation Context ... computational power in GPUs and the creation of large image datasets such as [3] have allowed Convolutional Neural Networks (CNNs) to show outstanding performance in computer vision challenges [11], [22], [4]. And despite requiring huge amounts of training samples to tune their millions of parameters, CNNs have proved to be very effective in domain transfer experiments [16]. This interesting property... |
40 | Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. arXiv preprint arXiv:1502.01852
- He, Zhang, et al.
- 2015
Citation Context ...understanding of viewer responses to advertisements using facial expressions [15]. However, while machines are approaching human performance on several recognition tasks, such as image classification [4], the task of automatically detecting sentiments and emotions from images and videos still presents many unsolved challenges. Numerous approaches towards bridging the affective gap, or the conceptual... |
21 | Intriguing properties of neural networks - Szegedy, Zaremba, et al. - 2014 |
20 | Emotional valence categorization using holistic image features - Yanulevskaya, Gemert, et al. - 2008 |
16 | The wisdom of social multimedia: using flickr for prediction and forecast - Jin - 2010 |
14 | Can we understand van Gogh’s mood? Learning to infer affects from images in social networks
- Jia, Wu, et al.
- 2012
Citation Context ...rds bridging the affective gap, or the conceptual and computational divide between low-level features and high-level affective semantics, have been presented over the years for visual multimedia [14], [5], [1], [9], but the performance has remained fairly conservative. In addition, the intuition behind affective analysis largely lacks in comparison to their counterparts in traditional computer vision a... |
12 | Analyzing and predicting sentiment of images on the social web.
- Siersdorfer, Minack, et al.
- 2010
Citation Context ...ead of Pyxel. 2. State of the art: Visual sentiment analysis. Several approaches towards overcoming the gap between visual features and affective semantic concepts can be found in the literature. In [21], the authors explore the potential of two low-level descriptors common in object recognition, Color Histograms (LCH, GCH) and SIFT-based Bag-of-Words, for the task of visual sentiment prediction. Som... |
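The snippet above mentions Global Color Histograms (GCH) among the low-level descriptors explored in [21]. A minimal sketch of such a descriptor in pure Python, assuming pixels as RGB tuples and a coarse 2-bins-per-channel quantisation (both assumptions for illustration):

```python
def global_color_histogram(pixels, bins_per_channel=2):
    """Global Color Histogram (GCH): quantise each RGB channel into a few
    bins and count how many pixels fall into each combined colour bin.
    The normalised counts form a low-level descriptor of the whole image."""
    n_bins = bins_per_channel ** 3
    hist = [0] * n_bins
    for r, g, b in pixels:
        # Map each 0-255 channel value to a bin index.
        idx = 0
        for channel in (r, g, b):
            idx = idx * bins_per_channel + min(channel * bins_per_channel // 256,
                                               bins_per_channel - 1)
        hist[idx] += 1
    total = len(pixels)
    return [count / total for count in hist]

# A tiny toy "image": half dark pixels, half bright red pixels.
pixels = [(10, 10, 10)] * 8 + [(250, 20, 20)] * 8
h = global_color_histogram(pixels)
```

A Local Color Histogram (LCH) would compute the same statistic per image region and concatenate the results.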
11 | Deep learning using linear support vector machines
- Tang
- 2013
Citation Context ... the suitability of Support Vector Machines for classification using deep learning descriptors [19], while others have also replaced the Softmax loss by an SVM cost function in the network architecture [24]. Given the results of our layer-wise analysis, it is not possible to claim that either of the two classifiers provides a consistent gain over the other for visual sentiment analysis, at least,... |
6 | Deep learning for robust feature generation in audiovisual emotion recognition - Kim, Lee, et al. - 2013 |
6 | Robust image sentiment analysis using progressively trained and domain transferred deep networks.
- You, Luo, et al.
- 2015
Citation Context ...acy of such machinery for higher abstraction tasks like sentiment analysis, i.e. classifying the visual sentiment (either positive or negative) that an image provokes in a human. Recently, some works [27], [25] explored CNNs for the task of visual sentiment analysis and obtained some encouraging results that outperform the state of the art, but develop very little intuition and analysis into the CNN a... |
5 |
Cultural event recognition with visual convnets and temporal models
- Salvador, Zeppelzauer, et al.
Citation Context ...figures too high for training the network from scratch with the limited amount of data available in the Twitter dataset. Given the good results achieved by previous works on transfer learning [16], [20], we decided to explore the possibility of fine-tuning an already existing model. Fine-tuning consists of initializing the weights in each layer except the last one with those values learned from another... |
5 | Object detectors emerge in deep scene CNNs
- Zhou, Khosla, et al.
- 2015
Citation Context ...focuses on acquiring insight into unsolved questions in the problem of visual sentiment prediction using CNNs which were originally trained for object detection, with a goal similar to that of the authors of [29], who studied object detectors in a CNN trained for places. We address this task using fine-tuned networks and assess the contribution of each layer in the former architectures to the overall performance... |
4 | Predicting viewer perceived emotions in animated gifs.
- Jou, Bhattacharya, et al.
- 2014
Citation Context ...ing the affective gap, or the conceptual and computational divide between low-level features and high-level affective semantics, have been presented over the years for visual multimedia [14], [5], [1], [9], but the performance has remained fairly conservative. In addition, the intuition behind affective analysis largely lacks in comparison to their counterparts in traditional computer vision and multim... |
2 | Visual Sentiment Prediction with Deep Convolutional Neural Networks. arXiv preprint arXiv:1411.5731
- Xu, Cetintas, et al.
- 2014
Citation Context ... such machinery for higher abstraction tasks like sentiment analysis, i.e. classifying the visual sentiment (either positive or negative) that an image provokes in a human. Recently, some works [27], [25] explored CNNs for the task of visual sentiment analysis and obtained some encouraging results that outperform the state of the art, but develop very little intuition and analysis into the CNN archite... |
1 | Predicting Ad Liking and Purchase Intent: Large-scale Analysis of Facial Responses to Ads
- McDuff, Kaliouby, et al.
- 2014
Citation Context ..., medicine or entertainment. Some interesting preliminary applications are already beginning to emerge, e.g. for emotional understanding of viewer responses to advertisements using facial expressions [15]. However, while machines are approaching human performance on several recognition tasks, such as image classification [4], the task of automatically detecting sentiments and emotions from images and ... |
1 | A Mixed Bag of Emotions: Model, Predict, and Transfer Emotion Distributions
- Peng, Chen, et al.
Citation Context ...ks have considered the use of descriptors inspired by art and psychology to address tasks such as visual emotion classification [14] or automatic image adjustment towards a certain emotional reaction [17]. In [1], a Visual Sentiment Ontology based on psychology theories and web mining, consisting of 3,000 Adjective Noun Pairs (ANP), is built. These ANPs serve as a mid-level representation that attempt t... |