### Citations

2831 | Online learning with kernels
- Kivinen, Smola, et al.
- 2002
Citation Context: ...bert space H. The projection P_i : R^n ↦ H which maps to the low dimensional space be a compact linear operator. Let K̃ = 〈Φ(X̃), Φ(X̃)〉_H be the kernel matrix associated with H. The representer theorem [20] states that P_i can be represented as P_i = Φ(X_i) A_i for some matrix A_i ∈ R^{N_i × n}. Using the above expression for projection matrices, we redefine the cost functions and the equality constraints as C1(Ã)...

958 | Sparse coding with an overcomplete basis set: A strategy employed by V1?
- Olshausen, Field
- 1997
Citation Context: ...ion was devoted to building a dictionary using off-the-shelf or parametric bases. The notion of building a dictionary from data instead of a predefined set of bases was studied by Olshausen and Field [14] in their seminal work. Data driven dictionaries have since yielded encouraging results among tasks like restoration [3], super-resolution [26, 23] and classification [25]. The effectiveness of these ...

936 | Robust face recognition via sparse representation
- Wright, Yang, et al.
- 2009
Citation Context: ...udied by Olshausen and Field [14] in their seminal work. Data driven dictionaries have since yielded encouraging results among tasks like restoration [3], super-resolution [26, 23] and classification [25]. The effectiveness of these dictionaries in such a diverse range of applications can be attributed to their superior ability in adapting to a particular set of data. However we might encounter situatio...

637 | Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition
- Pati, Rezaiifar, et al.
- 1993
Citation Context: ...mple, using the projection matrix P*_i: z_t = P_i^{*T} Φ(x_t) = A_i^T K_t where K_t = 〈Φ(X_i), Φ(x_t)〉_H. 2. Compute the sparse code s̄_t of the embedded test sample over the dictionary D using the OMP algorithm [16]: s̄_t = argmin_s ‖z_t − Ds‖_2^2 s.t. ‖s‖_0 ≤ T_0. 3. The test sample can now be allocated to class c if the reconstruction error using the class specific dictionary D_c and the corresponding sparse code s̄_t^c ...
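The three-step test evaluation quoted above (embed, sparse-code via OMP, assign by class-wise reconstruction error) can be sketched as follows. This is a minimal illustration over plain NumPy vectors rather than the paper's kernel embedding; `omp` and `classify` are hypothetical helper names, and the greedy loop is textbook OMP [16], not the authors' implementation.

```python
import numpy as np

def omp(D, x, T0):
    """Greedy Orthogonal Matching Pursuit: approximate x with at most
    T0 atoms (columns, assumed unit-norm) of the dictionary D."""
    residual = x.copy()
    support = []
    s = np.zeros(D.shape[1])
    for _ in range(T0):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit on the chosen support, then update residual.
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        s[:] = 0.0
        s[support] = coef
        residual = x - D @ s
    return s

def classify(x, class_dicts, T0=2):
    """Assign x to the class whose dictionary reconstructs it best."""
    errors = {c: np.linalg.norm(x - Dc @ omp(Dc, x, T0))
              for c, Dc in class_dicts.items()}
    return min(errors, key=errors.get)
```

With exact least squares on the selected support, a signal that is genuinely T0-sparse in one class dictionary yields a near-zero reconstruction error for that class and a large one elsewhere.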

599 | Image denoising via sparse and redundant representations over learned dictionaries
- Elad, Aharon
- 2006
Citation Context: ...m data instead of a predefined set of bases was studied by Olshausen and Field [14] in their seminal work. Data driven dictionaries have since yielded encouraging results among tasks like restoration [3], super-resolution [26, 23] and classification [25]. The effectiveness of these dictionaries in such a diverse range of applications can be attributed to their superior ability in adapting to a particul...

578 | Manifold regularization: A geometric framework for learning from examples
- Belkin, Niyogi, et al.
- 2006
Citation Context: ... preserve much of the information which is available in the original domains. To facilitate such preservation, we wish to minimize the following cost function, which includes a manifold regularization [1] term for data from each domain: C1(P_1, P_2) = tr(P_1^T X_1 L_1 X_1^T P_1) + tr(P_2^T X_2 L_2 X_2^T P_2), where tr(·) is the trace of a matrix and L_1, L_2 are the normalized graph-Laplacian matrices associated with the neares...
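As an illustration of the manifold-regularization term above, the sketch below builds a k-nearest-neighbour graph per domain, forms its symmetric normalized Laplacian, and evaluates C1(P_1, P_2) = tr(P_1^T X_1 L_1 X_1^T P_1) + tr(P_2^T X_2 L_2 X_2^T P_2). The 0/1 edge weights and the helper names are assumptions made for the example, not the paper's exact graph construction.

```python
import numpy as np

def normalized_laplacian(X, k=2):
    """Symmetric normalized graph Laplacian of a k-nearest-neighbour
    graph over the columns of X (each column is one sample)."""
    N = X.shape[1]
    # Pairwise squared distances between columns.
    d2 = ((X[:, :, None] - X[:, None, :]) ** 2).sum(axis=0)
    W = np.zeros((N, N))
    for i in range(N):
        nbrs = np.argsort(d2[i])[1:k + 1]   # skip self (distance 0)
        W[i, nbrs] = W[nbrs, i] = 1.0       # symmetric 0/1 weights
    deg = W.sum(axis=1)
    Dinv = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    return np.eye(N) - Dinv @ W @ Dinv

def manifold_cost(P1, X1, P2, X2, k=2):
    """C1(P1, P2) = tr(P1^T X1 L1 X1^T P1) + tr(P2^T X2 L2 X2^T P2)."""
    L1 = normalized_laplacian(X1, k)
    L2 = normalized_laplacian(X2, k)
    return (np.trace(P1.T @ X1 @ L1 @ X1.T @ P1)
            + np.trace(P2.T @ X2 @ L2 @ X2.T @ P2))
```

Because each normalized Laplacian is positive semidefinite, the cost is always non-negative; minimizing it pulls the low-dimensional embeddings of neighbouring samples together.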

448 | Caltech-256 object category dataset
- Griffin, Holub, et al.
- 2007
Citation Context: ...(c) (d) Figure 1. Some backpack images of (a) Amazon, (b) DSLR, (c) Webcam & (d) Caltech-256. low resolution images. It has 4,652 images and 31 classes. In addition, we choose the Caltech-256 dataset [7] as the fourth domain. Fig. 1 shows some BACKPACK images of all the four domains. We choose two different scenarios to test our algorithm. In the first scenario, we use 10 classes common to all four...

163 | Adapting visual category models to new domains
- Saenko, et al.
- 2010
Citation Context: ...d computationally expensive, making it infeasible for many practical applications. The idea of adapting classifiers to new domains has attracted a tremendous amount of interest recently, and a number of methods [19, 11, 5, 9] have been proposed. Jhuo et al. [9] proposed learning a transformation of source data onto the target space, such that the joint representation is low-rank. However, they do not effectivel...

117 | A kernel method for the two-sample-problem
- Gretton, Borgwardt, et al.
- 2007

111 | What you saw is not what you get: Domain adaptation using asymmetric kernel transforms
- Kulis, Saenko, et al.
- 2011
Citation Context: ...d computationally expensive, making it infeasible for many practical applications. The idea of adapting classifiers to new domains has attracted a tremendous amount of interest recently, and a number of methods [19, 11, 5, 9] have been proposed. Jhuo et al. [9] proposed learning a transformation of source data onto the target space, such that the joint representation is low-rank. However, they do not effectivel...

102 | Domain Adaptation for Object Recognition: An Unsupervised Approach
- Gopalan, Li, et al.
- 2011
Citation Context: ...d computationally expensive, making it infeasible for many practical applications. The idea of adapting classifiers to new domains has attracted a tremendous amount of interest recently, and a number of methods [19, 11, 5, 9] have been proposed. Jhuo et al. [9] proposed learning a transformation of source data onto the target space, such that the joint representation is low-rank. However, they do not effectivel...

102 | Domain adaptation via transfer component analysis
- Pan, Tsang, et al.
- 2011
Citation Context: ...ven in the reduced space. We seek to minimize this domain shift. To realize this, a natural strategy is to make the data distributions of both the domains as close as possible. In our work, we follow [6, 15, 12] and use the Maximum Mean Discrepancy (MMD) as the distance measure between the data distributions. It computes the distance between the sample means of both the distributions: C2(P_1, P_2) = ‖(1/N_1) ...
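For a linear kernel, the MMD term quoted above reduces to the squared distance between the projected sample means of the two domains. A minimal sketch (the helper name is hypothetical; columns of X1 and X2 are samples):

```python
import numpy as np

def mmd_cost(P1, X1, P2, X2):
    """Squared distance between the two domains' projected sample means:
    || mean(P1^T X1) - mean(P2^T X2) ||^2  (linear-kernel MMD)."""
    m1 = (P1.T @ X1).mean(axis=1)   # mean of domain-1 embeddings
    m2 = (P2.T @ X2).mean(axis=1)   # mean of domain-2 embeddings
    return float(np.sum((m1 - m2) ** 2))
```

The cost is zero when the projected means coincide and grows with the shift between them, which is why minimizing it pulls the two embedded distributions together.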

97 | Geodesic flow kernel for unsupervised domain adaptation
- Gong, Shi, et al.
- 2012
Citation Context: ...amples per class for Webcam/DSLR when used as a source domain. We use 3 samples per class for all the four domains when used as the target for testing. We compare our results with those obtained from [19, 4, 27, 5, 21]. Features for images. We used the 800-bin SURF features provided by [19] for the Amazon, Webcam and DSLR domains. For the Caltech domain, the 800-bin SURF features provided by [21] are used. Parameter se...

97 | Classification and clustering via dictionary learning with structured incoherence and shared features
- Ramirez, Sprechmann, et al.
- 2010
Citation Context: ...e Dictionaries The dictionary learned using the above approach can reconstruct data from multiple domains well, but it cannot discriminate among the data from different classes. Following recent advances [18, 27] in learning discriminative dictionaries, we split the dictionary D into class specific dictionaries {D_1, ..., D_C}, where C is the total number of classes. We modify the cost function C3 as: C3(D, P̃,...

72 | Fisher Discrimination Dictionary Learning for Sparse Representation
- Yang, Zhang, et al.
- 2011
Citation Context: ...e Dictionaries The dictionary learned using the above approach can reconstruct data from multiple domains well, but it cannot discriminate among the data from different classes. Following recent advances [18, 27] in learning discriminative dictionaries, we split the dictionary D into class specific dictionaries {D_1, ..., D_C}, where C is the total number of classes. We modify the cost function C3 as: C3(D, P̃,...

40 | A feasible method for optimization with orthogonality constraints
- Wen, Yin
- 2013
Citation Context: ...nd S̃ are fixed. Due to the orthonormality constraint on the projection matrices, this step involves optimization on the Stiefel manifold. We solved this problem using the efficient approach presented in [21, 24]. 3.2. Dictionary and Sparse code Update When Ã is fixed, this problem boils down to discriminative dictionary learning with the data matrix Z = Ã^T K̃. We use the discriminative dictionary lear...
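The projection update quoted above needs every iterate to stay on the Stiefel manifold (PᵀP = I); [24] does this with an efficient curvilinear search. As a simpler generic stand-in, not the method of [21, 24], the sketch below takes a plain gradient step and retracts back onto the manifold via a thin QR factorization.

```python
import numpy as np

def qr_retract(P):
    """Map a matrix onto the Stiefel manifold (P^T P = I) using the Q
    factor of its thin QR decomposition, with column signs fixed so the
    retraction varies continuously with P."""
    Q, R = np.linalg.qr(P)
    signs = np.sign(np.sign(np.diag(R)) + 0.5)  # maps 0 -> +1
    return Q * signs

def stiefel_step(P, grad, lr=0.1):
    """One gradient step on the cost, followed by retraction."""
    return qr_retract(P - lr * grad)
```

Flipping column signs does not affect orthonormality, so both the retracted point and every subsequent iterate satisfy the constraint exactly.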

38 | Factorized latent spaces with structured sparsity
- Jia, Salzmann, et al.
- 2010
Citation Context: ...naries which are adaptive to these changes is a challenging task, which has been garnering increased interest of late. Earlier works were focused on learning a dictionary for each domain. Jia et al. [10] considered such a case. But the dimension of the features is often high; hence learning a dictionary for each domain is cumbersome and computationally expensive, making it infeasible for many practic...

31 | Semi-coupled dictionary learning with applications to image super-resolution and photo-sketch synthesis
- Wang, Zhang, et al.
- 2012
Citation Context: ...edefined set of bases was studied by Olshausen and Field [14] in their seminal work. Data driven dictionaries have since yielded encouraging results among tasks like restoration [3], super-resolution [26, 23] and classification [25]. The effectiveness of these dictionaries in such diverse range of applications can be attributed to their superior ability in adapting to a particular set of data. However we ...

30 | Graph regularized sparse coding for image representation
- Zheng, Bu, et al.
- 2011

21 | Coupled dictionary training for image super-resolution
- Yang, Wang, et al.
- 2012
Citation Context: ...edefined set of bases was studied by Olshausen and Field [14] in their seminal work. Data driven dictionaries have since yielded encouraging results among tasks like restoration [3], super-resolution [26, 23] and classification [25]. The effectiveness of these dictionaries in such diverse range of applications can be attributed to their superior ability in adapting to a particular set of data. However we ...

20 | Robust Visual Domain Adaptation with Low-Rank Reconstruction
- Jhuo, Liu, et al.
- 2012

18 | Domain adaptive dictionary learning
- Qiu, Patel, et al.
- 2012
Citation Context: ...rojections, which may not result in optimal performance. Among dictionary based methods, Yang et al. [26] and Wang et al. [23] proposed learning dictionary pairs for cross modal synthesis. Qiu et al. [17] proposed learning adaptive dictionaries for smooth domain shifts using regression. However, in practice, domain shifts are wide and often result in abrupt changes among features (e.g., increase in res...

12 | Generalized Domain-Adaptive Dictionaries
- Shekhar, Patel, et al.
- 2013
Citation Context: ...ifts using regression. However, in practice, domain shifts are wide and often result in abrupt changes among features (e.g., an increase in resolution from a webcam image to a DSLR image). Shekhar et al. [21] jointly projected the data onto a low dimensional space by preserving the manifold structure of the data from each domain, and learned a common adaptive dictionary for multiple domains, which can als...

6 | Sparse embedding: A framework for sparsity promoting dimensionality reduction
- Nguyen, Patel, et al.
- 2012
Citation Context: ...ctionary learning approach presented in [27] to update D and S̃. 4. Test Evaluation As our goal is classification, given a test sample x_t from domain i, we propose the following steps, similar to [21, 13]. We map the sample into the kernel space Φ(x_t). 1. Compute the low dimensional embedding z_t of the sample, using the projection matrix P*_i: z_t = P_i^{*T} Φ(x_t) = A_i^T K_t where K_t = 〈Φ(X_i), Φ(x_t)〉_H. 2. Comp...

5 | Learning discriminative dictionaries with partially labeled data
- Shrivastava, Pillai, et al.
- 2012

4 | Frustratingly easy domain adaptation (arXiv preprint arXiv:0907.1815)
- Daume
- 2009
Citation Context: ...y. Such situations occur frequently in many computer vision problems, e.g., changes in resolution, illumination and pose of images. Such changes often lead to degradation in classification performance [2]. Learning dictionaries which are adaptive to these changes is a challenging task, which has been garnering increased interest of late. Earlier works were focused on learning a dictionary for each do...

4 | Sparse unsupervised dimensionality reduction for multiple view data
- Han, Wu, et al.
- 2012
Citation Context: ...g a transformation of source data onto the target space, such that the joint representation is low-rank. However, they do not effectively utilize the labeled data to learn the projections. Han et al. [8] learned a shared embedding for different domains, with a sparsity constraint on the representation. However, they treat the step of embedding the data onto a common domain separately rather than joint...

2 | Transfer joint matching for unsupervised domain adaptation
- Long, Wang, et al.
- 2014
Citation Context: ...ven in the reduced space. We seek to minimize this domain shift. To realize this, a natural strategy is to make the data distributions of both the domains as close as possible. In our work, we follow [6, 15, 12] and use the Maximum Mean Discrepancy (MMD) as the distance measure between the data distributions. It computes the distance between the sample means of both the distributions: C2(P_1, P_2) = ‖(1/N_1) ...