1 Introduction
Deep neural networks (DNNs) and other deep learning architectures have made significant advances
[3, 4]. In both well-benchmarked tasks and real-world applications, such as automatic speech recognition [21, 34, 44] and image recognition [29, 48], deep learning architectures have achieved an unprecedented level of success and have generated major impact. Arguably, the most instrumental factors contributing to their success are: (1) learning highly complex models with millions to billions of parameters from huge amounts of training data; (2) adopting simple but effective optimization methods such as stochastic gradient descent; (3) combating overfitting with new schemes such as dropout
[23]; and (4) computing with massive parallelism on GPUs. New techniques as well as “tricks of the trade” are frequently invented and added to the toolboxes for machine learning researchers and practitioners.
In stark contrast, there have been far fewer publicly known successful applications of kernel methods (such as support vector machines) to problems at a scale comparable to the speech and image recognition problems tackled by DNNs. This chasm is surprising, given that kernel methods have been extensively studied, both theoretically and empirically, for their power in modeling highly nonlinear data
[43]. Moreover, the connection between kernel methods and (infinite) neural networks has long been noted [35, 51, 11]. Nonetheless, a common misconception is that it may be difficult, if not impossible, for kernel methods to catch up with deep learning methods on large-scale learning problems. In particular, many kernel-based algorithms scale quadratically in the number of training samples. This barrier in computational complexity makes it especially challenging for kernel methods to reap the benefits of learning from very large amounts of data, whereas deep learning architectures are especially adept at it.
We contend that this skepticism can be sufficiently attenuated. Concretely, in this paper, we investigate and propose new ideas tailored for kernel methods, with the aim of scaling them up to take on challenging problems in computer vision and automatic speech recognition.
To this end, we build on the work by [38]
on approximating kernel functions with features derived from random projections. Our innovation, however, is to advance the state of the art to a much larger scale. Concretely, we propose fast training methods for models with hundreds of millions of parameters; these methods are necessary for classifiers using hundreds of thousands of features to recognize thousands of categories. We also propose scalable methods for combining multiple kernels as a way of learning feature representations. Interestingly, we show that multiplicative combinations of kernels scale better than additive ones.
We validate our approaches with extensive empirical studies. We contrast kernel models with DNNs on four large-scale benchmark datasets, some of which are often used to demonstrate the effectiveness of DNNs. We show that the performance of large-scale kernel models approaches or surpasses that of their deep learning counterparts, which are either exhaustively optimized by us or well accepted as industry yardsticks.
While providing a recipe for obtaining state-of-the-art large-scale kernel models, another important contribution of our work is to shed light on new perspectives and opportunities for future study. The techniques we have developed are easy to implement, readily reproducible, and incur much less computational cost (for hyperparameter tuning and model selection) than deep learning architectures. Thus, they are valuable tools, tested and verified to be effective for constructing comparative systems.
Comparative studies enabled by such systems will, in our view, be indispensable in pursuing the higher goal of exploring and understanding how the two camps of methods differ, for instance in learning new representations of the original data. (Note this inquiry would be informative only if both kernel methods and deep learning methods attain similar performance yet exploit different aspects of the data.) As an example, we show that combining kernel models and DNNs improves over either individual model, suggesting that the two paradigms learn different yet complementary representations from the data. We believe that research in this line will offer deep insights, and broaden the theory and practice of designing alternatives to both DNNs and kernel methods for large-scale learning.
The rest of the paper is organized as follows. We briefly review related work in section 2. In section 3, we give a brief account of [38]. We describe our approaches in section 4. In section 5, we report extensive experiments comparing DNNs and kernel methods on problems in image recognition and automatic speech recognition. We conclude and discuss future directions in section 6.
2 Related work
The computational complexity of kernel methods, such as support vector machines, depends quadratically on the number of training examples at training time and linearly on the number of training examples at testing time. Hence, scaling up kernel methods has been a long-standing and actively studied problem. [8] summarizes several earlier efforts in this vein.
With clever implementation tricks such as computation caching (for example, keeping only a small portion of the very large kernel matrix in memory), earlier kernel machines can cope with hundreds of thousands of samples [46, 17]. [7] provides an excellent account of various design considerations.
To further reduce the dependency on the number of training samples, a more effective strategy is to actively select training samples [6]. An early version of this idea was reflected in the Sequential Minimal Optimization (SMO) algorithm [37]. With more sophistication, this technique was extended to enable training SVMs on 8 million samples [33]. Alternative approaches exploit the equivalence between SVMs and sparse greedy approximation and solve SVMs approximately with a smaller subset of examples called coresets [49, 12]. (We also experimented with those techniques, though we were not able to identify significant empirical success.) Exploiting structures of the kernel matrix can scale kernel methods to 2 million to 50 million training samples [47]. Note that at the time of publication, none of the above-mentioned methods had been directly compared to DNNs.
Instead of reducing the number of training samples, we can reduce the dimensionality of the kernel features. In theory, those features are infinite-dimensional, but for any practical problem, the dimensionality is bounded above by the number of training samples. The main idea is then to directly use such features, after dimensionality reduction, to construct classifiers (i.e., to solve the SVM optimization problem in the primal space).
Thus far, approximating kernels with finite-dimensional features has been recognized as a promising way of scaling up kernel methods. The most relevant work to our paper is the early observation by Rahimi and Recht that inner products between features derived from random projections can be used to approximate translation-invariant kernels, a direct result of the spectral analysis of positive definite functions [5, 43, 38]. Their follow-up work using those random features, weighted random kitchen sinks [39], is a major inspiration to our work.
Since then, there has been growing interest in using random projections to approximate different kernels [25, 19, 31, 50]. For example, [15] studied how to use random features for online learning. We note that the amount of time for such classifiers to make a prediction depends linearly on the number of training samples, which could be a concern when the number of training samples is large.
In spite of this progress, there have been only a few reported large-scale empirical studies of those techniques on challenging tasks from speech recognition and computer vision, on which DNNs have been highly effective. In the context of automatic speech recognition, examples of directly using kernel methods have been reported [18, 9, 24]. However, the tasks were fairly small-scale (for instance, on the TIMIT dataset). Moreover, none of them explores kernel learning as a way of learning new representations. In contrast, one major aspect of our work is to use multiple kernel learning to arrive at new representations so as to reduce the gap between DNNs and kernel methods, cf. section 4.2.
3 Features from random projections
In what follows, we describe the basic idea we have built upon to scale up kernel methods. The technique is based on explicitly and efficiently constructing features — they are generated randomly — whose inner products then approximate kernel functions. Once such features are constructed, they can be used as inputs by any classifier.
3.1 Generate features by random projections
Given a pair of data points $x$ and $z$, a positive definite kernel function $k(\cdot, \cdot)$ defines an inner product between the images of the two data points under a (nonlinear) mapping $\phi(\cdot)$,

$$k(x, z) = \phi(x)^{\top} \phi(z), \qquad (1)$$

where the dimensionality $D$ of the resulting mapping $\phi(x)$ can be infinite (in theory).
Kernel methods avoid inference in the $D$-dimensional feature space. Instead, they rely on the $N \times N$ kernel matrix over the training samples. When $D$ is far greater than $N$, the number of training samples, this trick provides a nice computational advantage. However, when $N$ is exceedingly large, this complexity at the quadratic order of $N$ becomes impractical.
Rahimi and Recht leverage a classical result in harmonic analysis and provide a fast way to approximate $k(\cdot, \cdot)$ with finite-dimensional features [38]:
Theorem 1.
(Bochner’s theorem, adapted from [38]) A continuous kernel $k(x, z) = k(x - z)$ on $\mathbb{R}^{d}$ is positive definite if and only if $k(\delta)$ is the Fourier transform of a non-negative measure.
More specifically, for shift-invariant kernels such as the Gaussian RBF and Laplacian kernels,

$$k(x, z) = k(x - z) = k(\delta), \qquad (2)$$

the theorem implies that the kernel function can be expanded with harmonic basis functions, namely

$$k(\delta) = \int p(\omega)\, e^{i \omega^{\top} \delta}\, d\omega = \mathbb{E}_{\omega}\big[ e^{i \omega^{\top} \delta} \big], \qquad (3)$$

where $p(\omega)$ is the density of a $d$-dimensional probability distribution. The expectation is computed on complex-valued functions of $x$ and $z$. For real-valued kernel functions, however, it can be simplified to cosine and sine functions; see below. For the Gaussian RBF and Laplacian kernels, the corresponding densities are the Gaussian and Cauchy distributions, respectively:

$$k(\delta) = e^{-\|\delta\|_2^2 / (2\sigma^2)} \;\leftrightarrow\; p(\omega) = \mathcal{N}(0, \sigma^{-2} I), \qquad k(\delta) = e^{-\lambda \|\delta\|_1} \;\leftrightarrow\; p(\omega) = \prod_{j=1}^{d} \frac{\lambda / \pi}{\lambda^2 + \omega_j^2}. \qquad (4)$$
The harmonic decomposition suggests a sampling-based approach to approximating the kernel function. Concretely, we draw $\omega_1, \ldots, \omega_D$ i.i.d. from the distribution $p(\omega)$ and use the sample mean to approximate

$$k(x, z) \approx \frac{2}{D} \sum_{j=1}^{D} \cos(\omega_j^{\top} x + b_j)\cos(\omega_j^{\top} z + b_j) = \hat{\phi}(x)^{\top} \hat{\phi}(z). \qquad (5)$$

The random feature vector $\hat{\phi}(x)$ is thus composed of scaled cosines of random projections,

$$\hat{\phi}(x) = \sqrt{\frac{2}{D}} \big[ \cos(\omega_1^{\top} x + b_1), \ldots, \cos(\omega_D^{\top} x + b_D) \big]^{\top}, \qquad (6)$$

where each $b_j$ is a random variable, uniformly sampled from $[0, 2\pi]$. Details on the convergence properties of this approximation can be found in [38]. A key advantage of using approximate features over standard kernel methods is scalability to large datasets. Learning with a $D$-dimensional representation is relatively efficient provided that $D$ is far less than the number of training samples. For example, in our experiments (cf. section 5), we have several million training samples, while a much smaller $D$ often leads to good performance.
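As a concrete illustration, eq. (6) takes only a few lines to implement. The sketch below (NumPy; the function name and signature are ours, not from the paper) generates random Fourier features for the Gaussian RBF kernel:

```python
import numpy as np

def random_fourier_features(X, D, bandwidth, rng):
    """Random cosine features for the Gaussian RBF kernel
    k(x, z) = exp(-||x - z||^2 / (2 * bandwidth^2)), following eq. (6).

    X: (n, d) data matrix; D: number of random projections."""
    d = X.shape[1]
    # For the Gaussian RBF kernel, omega is drawn from a Gaussian density (eq. 4).
    omega = rng.normal(scale=1.0 / bandwidth, size=(d, D))
    # b is uniform on [0, 2*pi].
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ omega + b)
```

The inner product of two such feature vectors concentrates around the exact kernel value as $D$ grows, at the usual Monte Carlo rate.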
3.2 Use random features in classifiers
Just as standard kernel methods (SVMs or kernelized linear regression) can be seen as fitting data with linear models in kernel-induced feature spaces, we can plug the random feature vector $\hat{\phi}(x)$ into just about any (linear) model. In this paper, we focus on using it to construct multinomial logistic regression. Specifically, our model is a special instance of the weighted sum of random kitchen sinks [39],

$$p(y = c \mid x) = \frac{\exp\big(\theta_c^{\top} \hat{\phi}(x)\big)}{\sum_{c'} \exp\big(\theta_{c'}^{\top} \hat{\phi}(x)\big)}, \qquad (7)$$

where the label $y$ can take any value from $\{1, 2, \ldots, C\}$. We use multinomial logistic regression mainly because it can deal with a large number of classes and provide posterior probability assignments, needed by the application task (i.e., speech recognition systems, in order to combine with components such as language models).
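A minimal version of the classifier in eq. (7) can be sketched as follows: multinomial logistic regression fit by full-batch gradient descent for clarity (the paper's solver is SAG; all names here are ours):

```python
import numpy as np

def softmax(Z):
    Z = Z - Z.max(axis=1, keepdims=True)   # subtract row max for numerical stability
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

def fit_multinomial_logreg(Phi, y, n_classes, lr=0.5, epochs=300):
    """Fit p(y=c|x) proportional to exp(theta_c^T phi(x)) by maximizing
    the log-likelihood with gradient steps."""
    n, D = Phi.shape
    Theta = np.zeros((D, n_classes))
    Y = np.eye(n_classes)[y]                  # one-hot encoded labels
    for _ in range(epochs):
        P = softmax(Phi @ Theta)
        Theta -= lr * (Phi.T @ (P - Y)) / n   # gradient of the mean negative log-likelihood
    return Theta
```

`Phi` can be any feature matrix; in our setting it is the random feature matrix $\hat{\phi}(x)$ stacked over training samples.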
4 Our Approaches
To scale up kernel methods, we address two challenges: (1) how to train large-scale models in the form of eq. (7); and (2) how to learn optimal kernel functions adapted to the data. We tackle the former with a parallel optimization algorithm and the latter by extending the construction of random features initially proposed in [38].
4.1 Parallel optimization for largescale kernel models
While random features and weighted sums of random kitchen sinks have been investigated before, there are few reported cases of scaling up to problems commonly seen in automatic speech recognition and other domains. For example, in our empirical studies of acoustic modeling (cf. section 5.3), the number of classes is 1,000 and we often use hundreds of thousands of random features to compose $\hat{\phi}(x)$. Thus, the linear model in eq. (7) has a large number of parameters (on the order of hundreds of millions).
We have developed two major strategies to overcome this challenge. First, we leverage the observation that fitting multinomial logistic regression is a convex optimization problem and adopt the method of stochastic averaged gradient (SAG) for its faster convergence, both theoretical and empirical, over stochastic gradient descent (SGD) [42]. Note that while SGD is widely applicable to both convex and nonconvex optimization problems, SAG is specifically designed for convex optimization and thus well-suited to our learning setting.
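To illustrate the optimizer, here is a bare-bones SAG update for binary logistic regression, a simplification of [42] (no regularization, constant step size; all names are ours):

```python
import numpy as np

def sag_logistic(X, y, step=0.1, n_iters=5000, seed=0):
    """Stochastic averaged gradient: remember the most recent gradient of
    every sample and step along the average of the remembered gradients."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    memory = np.zeros((n, d))   # last seen per-sample gradient
    total = np.zeros(d)         # running sum of the remembered gradients
    for _ in range(n_iters):
        i = rng.integers(n)
        p = 1.0 / (1.0 + np.exp(-X[i] @ w))
        g = (p - y[i]) * X[i]   # gradient of the log-loss at sample i
        total += g - memory[i]  # swap the old gradient of sample i for the new one
        memory[i] = g
        w -= step * total / n
    return w
```

Unlike SGD, each step uses gradient information from all samples (through the memory), which is what yields SAG's faster convergence rate on convex problems.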
Secondly, we leverage the property that random projections are just random: given a $D$-dimensional $\hat{\phi}(x)$, any random subset of its coordinates is still random. Our idea is then to train a model on each subset of features in parallel and then assemble the models to form a large one.

Specifically, for large $D$, we partition $\hat{\phi}(x)$ into $M$ blocks, each of size $D/M$. Note that each block corresponds to a different set of random projections sampled from the density $p(\omega)$. We train $M$ multinomial logistic regression models and obtain $M$ sets of parameters $\theta_{c1}, \ldots, \theta_{cM}$ for each class $c$. To assemble them, we combine in the spirit of a geometric mean of the probabilities (i.e., an arithmetic mean of the logits),

$$p(y = c \mid x) \propto \prod_{m=1}^{M} \exp\big(\theta_{cm}^{\top} \hat{\phi}_m(x)\big)^{1/M} = \exp\Big(\frac{1}{M} \sum_{m=1}^{M} \theta_{cm}^{\top} \hat{\phi}_m(x)\Big). \qquad (8)$$

Note that this assembled model can be seen as a $D$-dimensional model whose parameters are the concatenation of the $\theta_{cm}/M$.
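The block-parallel scheme can be sketched as follows; in practice each block's fit would run on a separate worker, and a simple full-batch solver stands in for SAG here (all names are ours):

```python
import numpy as np

def _fit_block(Phi, Y, lr=0.5, epochs=300):
    # Multinomial logistic regression on one block of random features.
    n, D = Phi.shape
    Theta = np.zeros((D, Y.shape[1]))
    for _ in range(epochs):
        Z = Phi @ Theta
        Z -= Z.max(axis=1, keepdims=True)        # stable softmax
        P = np.exp(Z)
        P /= P.sum(axis=1, keepdims=True)
        Theta -= lr * (Phi.T @ (P - Y)) / n
    return Theta

def train_blockwise(Phi, y, n_classes, n_blocks):
    """Partition the D features into blocks, fit one model per block
    (parallelizable), and predict with the average of the per-block
    logits, i.e., the geometric mean of the probabilities (eq. 8)."""
    Y = np.eye(n_classes)[y]
    blocks = np.array_split(np.arange(Phi.shape[1]), n_blocks)
    models = [(idx, _fit_block(Phi[:, idx], Y)) for idx in blocks]
    def predict_logits(Phi_new):
        return np.mean([Phi_new[:, idx] @ Th for idx, Th in models], axis=0)
    return predict_logits
```

Because the blocks are disjoint, the per-block fits share no state and parallelize trivially; only the final logit averaging needs all of them.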
We sketch the main argument for the validity of this parallel training procedure, leaving a rigorous proof for future work. The parameters of the weighted sum of random kitchen sinks converge to the true risk minimizer as the number of random features grows [39]. Thus, for each model of size $D/M$, the pre-softmax activations (i.e., the logits) converge at the corresponding rate. For such models, the arithmetic mean of the logits converges as well, matching the rate for a $D$-dimensional model. Our extensive empirical studies support this argument: in virtually all training settings, the assembled models cannot be improved further, attaining the optimum of the corresponding $D$-dimensional model.

4.2 Learning kernel features
Another advantage of using kernels is to sidestep the problem of feature engineering, i.e., how to select the best basis functions for a task at hand. Essentially, determining what kernel function to use implicitly specifies the basis functions. But then the question becomes: how to select the best kernel function?
One popular paradigm to address the latter problem is multiple kernel learning (MKL) [30, 1, 13, 27]. That is, starting from a collection of base kernels, the algorithm identifies the best subset of them and combines them together to best adapt to the training data, analogous to designing the best features according to the data.
In the following, we show how a few common MKL ideas can benefit from the previously described largescale learning techniques (cf. section 3). While many MKL algorithms are formulated with kernel matrices (and thus are not easily scalable to large problems), we demonstrate how they can be efficiently implemented with the general recipe of random feature approximation. Among them, we show an interesting and novel result on combining kernels with Hadamard products, where the random feature approximation is especially computationally advantageous.
In our empirical studies (detailed in the Supplementary Material), we show that MKL improves over methods using a single kernel, and eventually approaches the performance of deep neural networks. Thus, MKL presents an effective and computationally tractable alternative to DNNs, even for large-scale problems.
Additive Kernel Combination
Given a collection of base kernels $\{k_m(\cdot, \cdot)\}_{m=1}^{M}$, their nonnegative combination

$$k(x, z) = \sum_{m=1}^{M} \beta_m k_m(x, z) \qquad (9)$$

is also a kernel function, provided $\beta_m \ge 0$ for all $m$.

Suppose each kernel $k_m$ is approximated with a $D$-dimensional random feature vector $\hat{\phi}_m(x)$, as in eq. (5). Then, given the linearity of the combination, the kernel function can be approximated by

$$k(x, z) \approx \hat{\phi}(x)^{\top} \hat{\phi}(z), \quad \text{where } \hat{\phi}(x) = \big[ \sqrt{\beta_1}\, \hat{\phi}_1(x)^{\top}, \ldots, \sqrt{\beta_M}\, \hat{\phi}_M(x)^{\top} \big]^{\top} \qquad (10)$$

is just the concatenation of the scaled $\hat{\phi}_m(x)$. Note that the dimensionality of $\hat{\phi}(x)$ would be $MD$.
There are several ways to exploit this approximation. The first is to straightforwardly plug $\hat{\phi}(x)$ into the multinomial logistic regression of eq. (7) and optimize over all $MD$ features. The second is more scalable. For each $k_m$, we learn an optimal model with parameters $\theta_{cm}$ for each class $c$. We then learn a set of combination coefficients $\{\beta_m\}$ by optimizing the likelihood of the model

$$p(y = c \mid x) \propto \exp\Big( \sum_{m=1}^{M} \beta_m\, \theta_{cm}^{\top} \hat{\phi}_m(x) \Big) \qquad (11)$$

while holding the other parameters fixed. This is a convex optimization over a (presumably) small set of parameters.
While the first approach is more general, empirically we do not observe a strong difference, and we have adopted the second approach for its scalability.
Multiplicative Kernel Combination
Kernels can also be multiplicatively combined from base kernels:

$$k(x, z) = \prod_{m=1}^{M} k_m(x, z). \qquad (12)$$

Note that this is a highly nonlinear combination [13]. Unlike the additive combination, there is no simple form (such as concatenation) for composing the approximate features of the individual kernels into an approximation of the multiplicative combination. Nonetheless, we have proved the following theorem as a way of constructing the approximate features for $k(\cdot, \cdot)$ efficiently.
Theorem 2.
Suppose all $k_m$ are translation-invariant kernels such that

$$k_m(x, z) = \int p_m(\omega)\, e^{i \omega^{\top} (x - z)}\, d\omega. \qquad (13)$$

Then $k(x, z) = \prod_m k_m(x, z)$ is also translation-invariant, such that

$$k(x, z) = \int p(\omega)\, e^{i \omega^{\top} (x - z)}\, d\omega, \qquad (14)$$

where the probability measure $p(\omega)$ is given by the convolution of all $p_m(\omega)$:

$$p(\omega) = (p_1 * p_2 * \cdots * p_M)(\omega). \qquad (15)$$

Moreover, let $\omega_m$ be a random variable drawn from the corresponding distribution $p_m(\omega)$; then

$$\omega = \sum_{m=1}^{M} \omega_m \;\sim\; p(\omega). \qquad (16)$$
Namely, to approximate $k(\cdot, \cdot)$, one needs only to draw random variables from each individual component kernel’s corresponding density, and use the sums of those variables to compute the random features.

The proof of the theorem is in the Supplementary Material. We note that the feature vectors approximating $k(\cdot, \cdot)$ and any individual $k_m(\cdot, \cdot)$ have the same dimensionality. Thus, the number of approximating features is independent of the number of kernels, leading to a computational advantage over the additive combination.
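Theorem 2 yields a particularly cheap implementation: draw one $\omega$ per factor kernel and feed their sum into the cosine features. A sketch for a product of Gaussian RBF kernels (all names are ours):

```python
import numpy as np

def product_kernel_features(X, bandwidths, D, rng):
    """Features approximating prod_m k_m(x, z) for Gaussian RBF factors.
    Per Theorem 2, omega is the sum of draws from each factor's density,
    so the feature dimension D is independent of the number of factors."""
    d = X.shape[1]
    omega = sum(rng.normal(scale=1.0 / s, size=(d, D)) for s in bandwidths)
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ omega + b)
```

For Gaussian factors this agrees with the closed form: the sum of independent zero-mean Gaussians is Gaussian with the variances added, exactly the density of the product kernel.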
Kernel composition
Kernels can also be composed. Specifically, if $k'(\cdot, \cdot)$ is a kernel function that depends only on the inner products of its arguments, then $k(x, z) = k'(\phi_1(x), \phi_1(z))$ is also a kernel function, where $\phi_1(\cdot)$ is the mapping induced by another kernel $k_1(\cdot, \cdot)$. A concrete example is when $k'$ is the Gaussian RBF kernel.

If we approximate both $k'$ and $k_1$ using the random feature approximation of eq. (5), the composition would be (graphically) equivalent to the following mapping,

$$x \;\mapsto\; \hat{\phi}'\big(\hat{\phi}_1(x)\big), \qquad (17)$$

namely, a one-hidden-layer neural network with the weight parameters in both layers being completely random. As before, the result of the composite mapping can be used as input features by any classifier.
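The composite mapping of eq. (17) is literally two stacked random feature layers; a sketch with Gaussian RBF kernels at both levels (all names are ours):

```python
import numpy as np

def composed_features(X, D1, D2, bw1, bw2, rng):
    """phi'(phi_1(x)): a one-hidden-layer network whose weights in both
    layers are random draws and are never trained."""
    def rff_layer(Z, bandwidth, D):
        omega = rng.normal(scale=1.0 / bandwidth, size=(Z.shape[1], D))
        b = rng.uniform(0.0, 2.0 * np.pi, size=D)
        return np.sqrt(2.0 / D) * np.cos(Z @ omega + b)
    return rff_layer(rff_layer(X, bw1, D1), bw2, D2)
```

Only the final linear classifier on top of these features is learned, in contrast to a DNN where both layers' weights would be trained.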
We also generalize this operation by introducing a linear projection $P$ to reduce dimensionality, serving as an information bottleneck: $x \mapsto \hat{\phi}'\big(P \hat{\phi}_1(x)\big)$. We experimented with two choices. First, $P$ performs PCA (using the sample covariance matrix) on $\hat{\phi}_1(x)$. Note that this amounts to an approximate kernel PCA on the original feature space, using the kernel $k_1$. Secondly, $P$ performs supervised dimensionality reduction. One simple choice is Fisher discriminant analysis (FDA) on $\hat{\phi}_1(x)$, which is equivalent to kernel FDA on the original features. In our experiments, we used a different procedure in a similar spirit. Specifically, we first use $\hat{\phi}_1(x)$ as input features to build a multinomial logistic regression model predicting the labels. We then perform PCA on the posterior probabilities. Our choice here is largely due to the consideration of reusing computations: we often need to estimate the performance of $\hat{\phi}_1(x)$ alone, so the multinomial classifier built with it is readily available.

5 Experimental Results
We validate our approaches to scaling up kernel methods on challenging problems in computer vision and automatic speech recognition (ASR). We conduct extensive empirical studies comparing kernel methods to deep neural networks (DNNs), which perform well in computer vision and are state-of-the-art in ASR. We show that kernel methods attain performance competitive with DNNs; see sections 5.2 and 5.3 (as well as the Supplementary Material) for details.
What can we learn from two very different, yet equally competitive, learning models? We report our initial findings on this question (section 5.5). We show that kernel methods and DNNs learn different yet complementary representations of the data. As such, a direct application of this observation is to combine them to obtain better models than either independently.
5.1 General setup
For all kernel-based models, we tune only three hyperparameters: the bandwidth of the Gaussian or Laplacian kernels, the number of random projections, and the step size of the (convex) optimization procedure (adjusting it has a similar effect to early stopping).
For all DNNs, we tune hyperparameters related to both the architecture and the optimization. This includes the number of layers, the number of hidden units in each layer, the learning rate, the rate decay, the momentum, regularization, etc. We also use unsupervised pretraining and tune hyperparameters for that phase too.
Details about tuning those hyperparameters are described in the Supplementary Material as they are often datasetspecific. In short, model selection for kernel models has significantly lower computational cost. We give concrete measures to support this observation in section 5.4.
Table 1: Misclassification error rates (%) on MNIST-6.7M.

Model                  |    kernel   |     DNN
Augment training data  |  no  | yes  |  no  | yes
On validation          | 0.97 | 0.77 | 0.71 | 0.62
On test                | 1.09 | 0.85 | 0.69 | 0.77
5.2 Computer vision tasks
We experiment on two problems: handwritten digit recognition and object recognition.
Handwritten digit recognition
We extract a dataset, MNIST-6.7M, from the dataset MNIST8M [33]. MNIST8M is a transformed version of the original MNIST dataset [32]. Concretely, we randomly select 50,000 of the 60,000 images in MNIST’s training set and extract the corresponding samples (the originals as well as their transformed/distorted versions) from MNIST8M, resulting in 6.75 million samples as our training set. We use the remaining 10,000 images from the original training set as a validation set; we purposely avoid using any transformed versions of those 10,000 images for validation to avoid potential overfitting. We report test error rates on the standard 10,000-image MNIST test set.
We also experimented with a data augmentation trick to increase the number of training samples. Whenever we encounter a training sample during training, we corrupt it with masking noise (randomly flipping pixels from 1 to 0 in the binary image). We crudely tune the mask-out rate over 0.1, 0.2, and 0.3.
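This masking-noise augmentation amounts to a single line per mini-batch; a sketch (names ours):

```python
import numpy as np

def mask_noise(batch, rate, rng):
    """Randomly flip active pixels to 0 with probability `rate`
    (tuned over 0.1, 0.2, 0.3 in our experiments)."""
    return batch * (rng.random(batch.shape) >= rate)
```

Because the corruption is drawn fresh on every pass, each epoch effectively sees a new variant of every training image.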
Table 1 compares the performance of the best single-kernel classifier to that of the best DNN. The kernel classifier uses a Gaussian kernel and 150,000 random projections. The best DNN has 4 hidden layers with 1000, 2000, 2000, and 2000 hidden units, respectively.
The difference between the kernel model and the DNN is small – about 16 (out of 10,000) misclassified images. Interestingly, on the test data, the kernel model benefits from the data augmentation trick while the DNN does not. Possibly, the DNN overfits to the validation dataset.
Table 2: Misclassification error rates (%) on CIFAR-10.

Model                  |    kernel   |     DNN
Augment training data  |  no  | yes  |  no  | yes
On validation          | 43.2 | 41.4 | 42.9 | 43.2
On test                | 43.9 | 42.2 | 43.3 | 44.0
Object recognition
For this task, we experiment on the CIFAR-10 database [28]. The dataset contains 50,000 training samples and 10,000 test samples. Each sample is a 32×32-pixel RGB image in one of 10 object categories. We randomly picked 10,000 images from the training set for validation, keeping the remaining 40,000 images for training. We did not apply any preprocessing to the images, as we want to relate our findings to previously published results, which often do not preprocess data or do not report all preprocessing details. We also experimented with an augmented version of the dataset, injecting Gaussian noise into the images during training.
Table 2 compares the performance of the best single-kernel classifier to that of the best DNN. The kernel classifier uses a Gaussian kernel and 4,000,000 random projections. The best DNN has 3 hidden layers with 4000, 2000, and 2000 hidden units, respectively.
The best kernel model performs slightly better than the DNN. They both outperform previously reported DNN results on this dataset, whose error rates are between 44.4% and 48.5% [41, 28, 40].
Convolutional neural nets (CNNs) can significantly outperform DNNs on this dataset. However, we do not compare to CNNs, as our kernel models (as well as the DNNs) do not build prior knowledge into their feature extraction, while CNNs are designed especially for object recognition.
5.3 Automatic speech recognition (ASR)
Task and evaluation metric
Deep neural nets (DNNs) have been very successfully applied to ASR, where they perform the task of acoustic modeling. Acoustic modeling is analogous to conventional multiclass classification: the goal is to learn a predictive model that assigns context-dependent phoneme state labels to short segments of speech, called frames, represented as acoustic feature vectors. Acoustic feature vectors are extracted from frames and their context windows (i.e., neighboring frames in temporal proximity).
Analogously, kernelbased multinomial logistic regression models, as described in section 3.2, are also used as acoustic models and compared to DNNs.
Acoustic models are often evaluated in conjunction with the other components of an ASR system. In particular, speech recognition is inherently a sequence recognition problem. Thus, perplexity and classification accuracy, commonly used for conventional multiclass classification problems, provide only a proxy (and intermediate goals) for the sequence recognition error. To measure the latter, a full ASR pipeline is necessary, where the posterior probabilities of the phoneme states are combined with the probabilities of the language models (over the linguistic units of interest, such as words) to yield the most probable sequence of those units. A best alignment with the ground-truth sequence is computed, yielding token error rates (TER).
Given the inherent complexity, in what follows, we summarize the empirical studies of applying both paradigms to the acoustic modeling task. We will report TER on two different languages. Details are presented in the Supplementary Material, including comparisons in terms of both perplexity and accuracy for different models. We begin by describing the datasets, followed by a brief description of various kernel and DNN models we have experimented with.
Datasets
We use two datasets: the IARPA Babel Program Cantonese (IARPA-babel101-v0.4c) and Bengali (IARPA-babel103b-v0.4b) limited language packs. Each pack contains a 20-hour training set and a 20-hour test set. We designate about 10% of the training data as a held-out set for model selection and tuning.
We follow the same procedure to extract acoustic features from raw audio as in previous work using DNNs for ASR [26]. In particular, we use IBM’s proprietary Attila system, adapted for the above-mentioned Babel language packs. The acoustic features are 360-dimensional real-valued dense vectors. There are 1,000 (non-overlapping) context-dependent phoneme state labels for each language pack. For Cantonese, there are about 7.5 million data points for training, 0.9 million for the held-out set, and 7.2 million for test; for Bengali, 7.7 million for training, 1.0 million for the held-out set, and 7.1 million for test. For Bengali, the TER metric is the word error rate (WER); for Cantonese, it is the character error rate (CER).
Table 3: Token error rates (%) on the Bengali and Cantonese test sets.

Model              | Bengali | Cantonese
ibm                |  70.4   |   67.3
rbm                |  69.5   |   66.3
best kernel model  |  70.0   |   65.7
Models evaluated

IBM’s Attila ASR system has a DNN acoustic model that contains five hidden layers, each with 1,024 units with logistic nonlinearities. We refer to this system as ibm. We developed another version of the DNN, following the original Restricted Boltzmann Machine (RBM)-based training procedure for learning DNNs [22]. Specifically, the pretraining is unsupervised. We trained DNNs with 1, 2, 3, and 4 hidden layers, and 500, 1000, and 2000 hidden units per layer (thus, 12 architectures per language). We refer to this system as rbm.

For kernel-based acoustic models, we used Gaussian RBF kernels, Laplacian kernels, or combinations of them. The only kernel hyperparameter to tune is the bandwidth, which ranges from 0.3 to 5 times the median of the pairwise distances in the data (typically, the median works well); the number of random projections ranges from 2,000 to 400,000 (though stable performance is often observed at 25,000 or above). For training with very large numbers of features, we used the parallel training procedure described in section 4.1. For optimization, we used stochastic averaged gradient and tuned the step size loosely over 4 values.
Details about these systems are in Supplementary Material.
Results
Table 3 reports the best-performing models measured in TER. The RBM-trained DNN (rbm), which has 4 hidden layers and 2000 units in each layer, performs best on Bengali. But our best kernel model, which uses a Gaussian RBF kernel and 150,000 – 200,000 random projections, performs best on Cantonese. Both perform better than IBM’s DNN. On Cantonese, the improvement of the kernel model over ibm is noticeably substantial (a 1.6% absolute reduction).
5.4 Computational efficiency
In contrast to DNNs, kernel models can be developed more efficiently. We illustrate this in two respects: the computational cost of training a single model, and the cost of model selection (i.e., hyperparameter tuning).
Cost of training a single model
The amount of training time depends on several factors, including the volume and dimensionality of the dataset, the choice of hyperparameters and their effect on convergence, implementation details, etc. We give a rough picture after controlling for these extraneous factors as much as possible.
We implement both methods with highly optimized Matlab codes (comparable to our CUDA C implementation) and utilize a single GPU (NVidia Tesla K20m). The timing results reported below are obtained from training acoustic models on the Bengali language dataset.
For a kernel model with 25,000 random projections (25 million model parameters), convergence is reached in fewer than 20 epochs, with an average of 15 minutes per epoch. In contrast, a competitive deep model with four hidden layers of 2,000 hidden units (15 million parameters), if initialized with pretrained parameters, reaches convergence in roughly 10 epochs, with an average of 28 minutes per epoch. (The pretraining requires an additional 12 hours.)
Thus, the training time for a single kernel model is about the same as that for a DNN. This holds for a range of datasets and configurations of hyperparameters.
Cost of model selection
The number of kernel models to be tuned is significantly smaller (by at least an order of magnitude) than the number of DNNs. There are only two hyperparameters to search over when selecting kernel models: the kernel bandwidth and the learning rate. Generally, the higher the number of random projections, the better the performance.
For DNNs, the number of hyperparameters needed to be tuned is substantially more. As previously mentioned, in our experiments, we tuned those related to the network architecture and optimization procedure, for both pretraining and finetuning. As such, it is fairly common to select the best DNN among hundreds to thousands of them.
Combining both factors, kernel models are especially appealing for tackling new problem settings where there is only weak knowledge about the optimal hyperparameters or their proper ranges. To develop DNNs in this scenario, one would be forced to adjust many knobs combinatorially, whereas kernel approaches are simple and straightforward.
Dataset  MNIST-6.7  CIFAR-10  Bengali  Cantonese
Best single  0.69  42.2  69.5  65.7
Combined  0.61  40.3  69.1  64.9
5.5 Do kernel and deep models learn the same thing?
Given their matching performances, do kernel and DNN models learn the same knowledge from data?
We report our initial findings in the following. We first combine the best performing models from each paradigm, using a weighted sum of the pre-softmax activations (i.e., logits). Table 4 summarizes the results across the 4 tasks studied in the previous sections. In the row "best single", blue indicates the number comes from a kernel model and red from a DNN. Clearly, combining the two improves on either one alone.
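The combination scheme above can be sketched as follows; this is a minimal illustration with random stand-ins for the two models' pre-softmax activations, and the combining weight `alpha` is a hypothetical tuning knob (the paper does not specify how the weight was chosen):

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_classes = 6, 4

# Stand-ins for the pre-softmax activations (logits) produced by
# the best kernel model and the best DNN on the same test samples.
logits_kernel = rng.normal(size=(n_samples, n_classes))
logits_dnn = rng.normal(size=(n_samples, n_classes))

def combine_logits(logits_a, logits_b, alpha=0.5):
    """Weighted sum of pre-softmax activations (logits)."""
    return alpha * logits_a + (1.0 - alpha) * logits_b

# Predictions of the combined system are the argmax of the summed logits.
preds = combine_logits(logits_kernel, logits_dnn).argmax(axis=1)
```

Summing logits rather than post-softmax probabilities keeps the combination linear in each model's output and avoids renormalization.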
These improvements suggest that, despite being close in error rates, the two models are still different enough. We gain a more intuitive understanding by visualizing what each model learns. To this end, we project each model's pre-softmax activations onto the 2D plane with t-SNE.
Fig. 1 contrasts the embeddings for 1000 samples from MNIST-6.7M's test set. Each data point is a dot and the color encodes its label. Given the low classification error rates, it is not surprising to find 10 well-separated clusters, one for each of the 10 digits. However, the relative positioning of those clusters differs noticeably between the DNN and the kernel model. It is not clear that the embeddings can be transformed into each other by linear transformations. This seems to suggest that each method has its own unique way of nonlinearly embedding data. Elucidating this more precisely is a direction for our future research.
6 Conclusion
We propose techniques to scale up kernel methods to large learning problems commonly found in speech recognition and computer vision. We have shown that the performance of those large kernel models approaches or surpasses that of their deep neural network counterparts, which have been regarded as the state of the art. Future directions of our research include understanding the differences between these two camps of methods, for instance in learning new representations of data.
Acknowledgement
F. S. is grateful to Lawrence K. Saul (UCSD), Léon Bottou (Microsoft Research), Alex Smola (CMU), and Chris J. C. Burges (Microsoft Research) for many fruitful discussions and pointers to relevant work.
Computation for the work described in this paper was partially supported by the University of Southern California's Center for High-Performance Computing (http://hpc.usc.edu).
This work was supported by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Defense U.S. Army Research Laboratory (DoD/ARL) contract number W911NF-12-C-0012. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DoD/ARL, or the U.S. Government.
Additionally, A. B. G. is partially supported by a USC Provost Graduate Fellowship. F. S. is partially supported by NSF IIS-1065243, a Google Research Award, an Alfred P. Sloan Research Fellowship, and an ARO YIP Award (W911NF-12-1-0241).
References
 Bach et al. [2004] Francis R. Bach, Gert R. G. Lanckriet, and Michael I. Jordan. Multiple kernel learning, conic duality, and the SMO algorithm. In Proc. of the 21st Intl. Conf. on Mach. Learn. (ICML), 2004.
 Bengio et al. [2009] Y. Bengio, D. Schuurmans, J.D. Lafferty, C.K.I. Williams, and A. Culotta, editors. Advances in Neural Information Processing Systems 22, 2009.
 Bengio [2009] Yoshua Bengio. Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1):1–127, January 2009.
 Bengio et al. [2013] Yoshua Bengio, Aaron C. Courville, and Pascal Vincent. Representation learning: a review and new perspectives. IEEE Trans. on Pattern Anal. & Mach. Intell., 35(8):1798–1828, 2013.
 Berg et al. [1984] Christian Berg, Jens Peter Reus Christensen, and Paul Ressel. Harmonic Analysis on Semigroups. Springer, 1984.
 Bottou [2014] Léon Bottou. Personal communication, 2014.
 Bottou and Lin [2007] Léon Bottou and Chih-Jen Lin. Support vector machine solvers. In Bottou et al. [8].
 Bottou et al. [2007] Léon Bottou, Olivier Chapelle, Dennis DeCoste, and Jason Weston, editors. Large Scale Kernel Machines. MIT Press, Cambridge, MA., 2007.
 Cheng and Kingsbury [2011] ChihChieh Cheng and B. Kingsbury. Arccosine kernels: Acoustic modeling with infinite neural networks. In Acoustics, Speech and Signal Processing (ICASSP), 2011 IEEE International Conference on, pages 5200–5203, 2011.
 Cho et al. [2011] K. Cho, A. Ilin, and T. Raiko. Improved learning of Gaussian-Bernoulli restricted Boltzmann machines. In Proceedings of the International Conference on Artificial Neural Networks (ICANN 2011), pages 10–17, 2011.
 Cho and Saul [2009] Youngmin Cho and Lawrence K. Saul. Kernel methods for deep learning. In Bengio et al. [2], pages 342–350.
 Clarkson [2010] Kenneth L. Clarkson. Coresets, sparse greedy approximation, and the Frank-Wolfe algorithm. ACM Trans. Algorithms, 6(4):63:1–63:30, 2010.
 Cortes et al. [2009] Corinna Cortes, Mehryar Mohri, and Afshin Rostamizadeh. Learning nonlinear combinations of kernels. In Bengio et al. [2], pages 396–404.
 Cortes et al. [2014] Corinna Cortes, Neil Lawrence, and Kilian Weinberger, editors. Advances in Neural Information Processing Systems 27, 2014.
 Dai et al. [2014] Bo Dai, Bo Xie, Niao He, Yingyu Liang, Anant Raj, MariaFlorina Balcan, and Le Song. Scalable kernel methods via doubly stochastic gradients. In Cortes et al. [14].
 Dasgupta and McAllester [2013] Sanjoy Dasgupta and David McAllester, editors. Proc. of the 30th Int. Conf. on Mach. Learn. (ICML), volume 28 of JMLR W & CP, 2013.
 DeCoste and Schölkopf [2002] Dennis DeCoste and Bernhard Schölkopf. Training invariant support vector machines. Mach. Learn., 46:161–190, 2002.
 Deng et al. [2012] Li Deng, Gökhan Tür, Xiaodong He, and Dilek Z. Hakkani-Tür. Use of kernel deep convex networks and end-to-end learning for spoken language understanding. In 2012 IEEE Spoken Language Technology Workshop (SLT), Miami, FL, USA, December 2-5, 2012, pages 210–215, 2012.
 Hamid et al. [2014] Raffay Hamid, Ying Xiao, Alex Gittens, and Dennis DeCoste. Compact random feature maps. In Dasgupta and McAllester [16], pages 19 – 27.
 Hinton [2002] G. E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8):1771–1800, 2002.
 Hinton et al. [2012a] Geoffrey Hinton, Li Deng, Dong Yu, George E. Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara Sainath, and Brian Kingsbury. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. Signal Processing Magazine, IEEE, 29(6):82–97, 2012a.
 Hinton et al. [2006] Geoffrey E. Hinton, Simon Osindero, and Yee-Whye Teh. A fast learning algorithm for deep belief nets. Neural Comp., 18(7):1527–1554, 2006.
 Hinton et al. [2012b] Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R. Salakhutdinov. Improving neural networks by preventing coadaptation of feature detectors. arXiv:1207.0580, July 2012b. URL http://arxiv.org/abs/1207.0580.
 Huang et al. [2014] PoSen Huang, Haim Avron, Tara N Sainath, Vikas Sindhwani, and Bhuvana Ramabhadran. Kernel methods match deep neural networks on TIMIT. In Proc. of the 2014 IEEE Intl. Conf. on Acou., Speech and Sig. Proc. (ICASSP), volume 1, page 6, 2014.
 Kar and Karnick [2012] Purushottam Kar and Harish Karnick. Random feature maps for dot product kernels. In Proc. of the 29th Intl. Conf. on Mach. Learn. (ICML), 2012.
 Kingsbury et al. [2013] Brian Kingsbury, Jia Cui, Xiaodong Cui, Mark J. F. Gales, Kate Knill, Jonathan Mamou, Lidia Mangu, David Nolden, Michael Picheny, Bhuvana Ramabhadran, Ralf Schlüter, Abhinav Sethy, and Phlip C. Woodland. A highperformance Cantonese keyword search system. In Proc. of the 2013 IEEE Intl. Conf. on Acou., Speech and Sig. Proc. (ICASSP), pages 8277–8281, 2013.
 Kloft et al. [2011] Marius Kloft, Ulf Brefeld, Sören Sonnenburg, and Alexander Zien. lp-norm multiple kernel learning. Journal of Machine Learning Research, 12:953–997, 2011.
 Krizhevsky and Hinton [2009] A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images, 2009.
 Krizhevsky et al. [2012] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. Imagenet classification with deep convolutional neural networks. In Pereira et al. [36], pages 1097–1105.
 Lanckriet et al. [2004] Gert R. G. Lanckriet, Nello Cristianini, Peter L. Bartlett, Laurent El Ghaoui, and Michael I. Jordan. Learning the kernel matrix with semidefinite programming. Journal of Machine Learning Research, 5:27–72, 2004.
 Le et al. [2014] Quoc Viet Le, Tamás Sarlós, and Alexander Johannes Smola. Fastfood: Approximating kernel expansions in loglinear time. In Dasgupta and McAllester [16].
 LeCun and Cortes [1998] Y. LeCun and C. Cortes. The MNIST database of handwritten digits, 1998.
 Loosli et al. [2007] Gaëlle Loosli, Stéphane Canu, and Léon Bottou. Training invariant support vector machines using selective sampling. In Bottou et al. [8].
 Mohamed et al. [2012] Abdel-rahman Mohamed, George Dahl, and Geoffrey Hinton. Acoustic modeling using deep belief networks. IEEE Transactions on Audio, Speech, and Language Processing, 20(1):14–22, 2012.
 Neal [1994] R. Neal. Priors for infinite networks. Technical Report CRG-TR-94-1, Dept. of Computer Science, University of Toronto, 1994.
 Pereira et al. [2012] F. Pereira, C.J.C. Burges, L. Bottou, and K. Q. Weinberger, editors. Advances in Neural Information Processing Systems 25, 2012.
 Platt [1998] John C. Platt. Fast training of support vector machines using sequential minimal optimization. In Advances in Kernel Methods - Support Vector Learning. MIT Press, 1998.
 Rahimi and Recht [2007] Ali Rahimi and Benjamin Recht. Random features for largescale kernel machines. In Advances in Neural Information Processing Systems 20, pages 1177–1184, 2007.
 Rahimi and Recht [2008] Ali Rahimi and Benjamin Recht. Weighted sums of random kitchen sinks: Replacing minimization with randomization in learning. In Advances in Neural Information Processing Systems 21, pages 1313–1320, 2008.
 Raiko et al. [2012] T. Raiko, H. Valpola, and Y. LeCun. Deep learning made easier by linear transformations in perceptrons. In International Conference on Artificial Intelligence and Statistics, pages 924–932, 2012.
 Rifai et al. [2011] S. Rifai, P. Vincent, X. Muller, X. Glorot, and Y. Bengio. Contractive auto-encoders: Explicit invariance during feature extraction. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 833–840, 2011.
 Roux et al. [2012] Nicolas L. Roux, Mark Schmidt, and Francis R. Bach. A stochastic gradient method with an exponential convergence rate for finite training sets. In Pereira et al. [36], pages 2663–2671.
 Schölkopf and Smola [2002] B. Schölkopf and A. Smola. Learning with kernels. MIT Press, 2002.
 Seide et al. [2011a] Frank Seide, Gang Li, Xie Chen, and Dong Yu. Feature engineering in contextdependent deep neural networks for conversational speech transcription. In Automatic Speech Recognition and Understanding (ASRU), 2011 IEEE Workshop on, pages 24–29, 2011a.
 Seide et al. [2011b] Frank Seide, Gang Li, and Dong Yu. Conversational speech transcription using contextdependent deep neural networks. In Proc. of Interspeech, pages 437–440, 2011b.
 Smola [2014] Alex Smola. Personal communication, 2014.
 Sonnenburg and Franc [2010] Sören Sonnenburg and Vojtech Franc. COFFIN: A computational framework for linear SVMs. In Proc. of the 27th Intl. Conf. on Mach. Learn. (ICML), pages 999–1006, Haifa, Israel, 2010. URL http://www.icml2010.org/papers/280.pdf.
 Szegedy et al. [2014] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In Cortes et al. [14].
 Tsang et al. [2005] Ivor W. Tsang, James T. Kwok, and PakMing Cheung. Core vector machines: Fast SVM training on very large data sets. Journal of Machine Learning Research, 6:363–392, 2005. URL http://www.jmlr.org/papers/v6/tsang05a.html.
 Vedaldi and Zisserman [2012] A. Vedaldi and A. Zisserman. Efficient additive kernels via explicit feature maps. IEEE Trans. on Pattern Anal. & Mach. Intell., 34(3):480–492, 2012.
 Williams [1996] C. K. I. Williams. Computing with infinite networks. In Advances in Neural Information Processing Systems 9, pages 599–621, 1996.
Appendix A: Proof of Theorem 2
Theorem 2.
Suppose $k_1, \dots, k_m$ are all translation-invariant kernels such that
$$k_j(x, z) = \kappa_j(x - z) = \int p_j(\omega)\, e^{\mathrm{i}\omega^\top (x - z)}\, d\omega.$$
Then the product $k(x, z) = \prod_{j=1}^m k_j(x, z)$ is also translation-invariant such that
$$k(x, z) = \int p(\omega)\, e^{\mathrm{i}\omega^\top (x - z)}\, d\omega,$$
where the probability measure $p$ is given by the convolution of the $p_j$'s:
$$p = p_1 * p_2 * \cdots * p_m.$$
Moreover, let $\omega_j$ be a random variable drawn from the corresponding distribution $p_j$; then
$$\omega = \sum_{j=1}^m \omega_j \sim p.$$
Namely, to approximate $k$, one needs only to draw random variables from each individual component kernel's corresponding density, and use the sum of those variables to compute random features.
Proof.
Denote $\delta = x - z$.
For a translation-invariant kernel, we have
$$k_j(x, z) = \kappa_j(\delta) = \int p_j(\omega)\, e^{\mathrm{i}\omega^\top \delta}\, d\omega.$$
The product of the kernels is
$$k(x, z) = \prod_{j=1}^m \kappa_j(\delta) = \kappa(\delta),$$
which is also translation-invariant.
We have used the fact (due to the convolution theorem) that the product of Fourier transforms is the Fourier transform of the convolution:
$$\prod_{j=1}^m \int p_j(\omega)\, e^{\mathrm{i}\omega^\top \delta}\, d\omega = \int (p_1 * p_2 * \cdots * p_m)(\omega)\, e^{\mathrm{i}\omega^\top \delta}\, d\omega.$$
It means we have found a new distribution $p = p_1 * \cdots * p_m$ as the random-projection generating distribution for the new kernel $k$.
From the definition of convolution, in order to sample from $p$, we can simply use the sum of independent samples drawn from the $p_j$'s. ∎
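The theorem translates directly into a sampling recipe. The sketch below is an illustration (not the paper's implementation) that approximates the product of two Gaussian RBF kernels by summing independent draws from each component's spectral density, using standard random Fourier features (Rahimi and Recht); the dimensions and bandwidths are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
d, D = 5, 200_000  # input dimension, number of random features

def gaussian_spectral_samples(sigma, size):
    # For k(x, z) = exp(-||x - z||^2 / (2 sigma^2)),
    # the spectral density is N(0, sigma^{-2} I).
    return rng.normal(scale=1.0 / sigma, size=size)

sigma1, sigma2 = 1.0, 2.0
# Theorem 2: a projection for the PRODUCT kernel is the sum of independent
# samples from each component kernel's spectral density.
W = gaussian_spectral_samples(sigma1, (D, d)) + gaussian_spectral_samples(sigma2, (D, d))
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

def features(x):
    # Random Fourier features: sqrt(2/D) * cos(Wx + b)
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

x, z = rng.normal(size=d), rng.normal(size=d)
approx = features(x) @ features(z)
sq = np.sum((x - z) ** 2)
exact = np.exp(-sq / (2 * sigma1 ** 2)) * np.exp(-sq / (2 * sigma2 ** 2))
# approx converges to exact at rate O(1 / sqrt(D))
```

For two Gaussians this recovers the known closed form (the product is again a Gaussian kernel), but the same recipe applies whenever each component's spectral density can be sampled.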
Appendix B: Detailed Experimental Results
B.1 Image recognition
We first provide details on our empirical studies on challenging problems in image recognition.
B.1.1 Handwritten digit recognition
Dataset and Preprocessing
The dataset is described in section 5.2 of the main text. We scale the input to between 0 and 1 by dividing by 256.
Kernel
We use Gaussian RBF and Laplacian kernels, with the kernel bandwidth selected based on the median of the pairwise distances in the data. We select the learning rate from {}. The random feature dimension we have used is 150,000. Performance with different dimensions is shown in Table 5.
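The median heuristic and the corresponding random features for both kernel families can be sketched as follows; synthetic data stands in for the MNIST features, and the dimension counts are illustrative rather than those used in the experiments:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))  # synthetic stand-in for the training data

# Median heuristic: bandwidth = median pairwise Euclidean distance.
diff = X[:, None, :] - X[None, :, :]
dists = np.sqrt((diff ** 2).sum(axis=-1))
sigma = np.median(dists[np.triu_indices_from(dists, k=1)])

D = 10_000  # number of random features (150,000 in the experiments)
# Gaussian RBF kernel exp(-||x-z||^2 / (2 sigma^2)): spectral density N(0, sigma^{-2} I).
W_gauss = rng.normal(scale=1.0 / sigma, size=(D, X.shape[1]))
# Laplacian kernel exp(-||x-z||_1 / sigma): spectral density is Cauchy with scale 1/sigma.
W_lap = rng.standard_cauchy(size=(D, X.shape[1])) / sigma
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

def random_features(X, W, b):
    # sqrt(2/D) * cos(XW^T + b), fed into a linear (here, multinomial logistic) model
    return np.sqrt(2.0 / len(b)) * np.cos(X @ W.T + b)

Phi = random_features(X, W_gauss, b)  # (500, D) feature matrix
```

The inner product of two rows of `Phi` approximates the Gaussian kernel value for the corresponding inputs, with error shrinking as `D` grows.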
DNN
We trained DNNs with 1 to 4 hidden layers, with 1000, 2000, 2000, and 2000 hidden units, respectively. We first pretrained 1 Gaussian-Bernoulli and 3 consecutive Bernoulli restricted Boltzmann machines (RBMs), all using stochastic gradient descent (SGD) with the Contrastive Divergence (CD-1) algorithm [20].
We select the learning rate from {}, momentum from {0.5, 0.9}, and set the L2 regularization for the 2 epochs of pretraining. In finetuning, we tune SGD with the learning rate from {}, momentum from {0.7, 0.9}. We decrease the learning rate by a factor of 0.99 every epoch, set the minibatch size to 100, and the L2 regularization to 0. We use early stopping to control overfitting. When trained with data augmentation, we use a smaller learning rate and run for more epochs.
Data Augmentation
We use maskout noise with ratio {0.1, 0.2, 0.3} for both kernel methods and DNN.
Results
Table 6 compares the performance of kernel methods to deep neural nets of different architectures. The best DNN result comes from a 4-hidden-layer network. Deep nets generally have slightly smaller test errors; kernel models benefit more from data augmentation and achieve similar error rates.
Kernel type  Data aug.  10K  50K  100K  150K (columns: number of random features; entries: validation/test error)
Gaussian  No  1.45/1.42  1.03/1.25  0.98/1.12  0.97/1.09 
Laplacian  No  1.93/1.93  1.21/1.34  1.16/1.17  1.10/1.13 
Gaussian  Yes    0.83/1.03  0.79/0.92  0.77/0.85 
Model  Original  Augmented  

Validation  Test  Validation  Test  
kernel  0.97  1.09  0.77  0.85 
4 hidden  0.71  0.69  0.64  0.80 
3 hidden  0.78  0.73  0.74  0.77 
2 hidden  0.76  0.71  0.64  0.79 
1 hidden  0.84  0.95  0.79  0.76 
PCA Embedding
In addition to the t-SNE visualization, we also project each model's pre-softmax activations onto the first two principal components given by PCA. Fig. 2 contrasts the PCA embeddings for 1000 samples from MNIST-6.7M's test set.
The neural network seems to give more spread-out embeddings than the kernel machine. The most noticeable classes are digit 3 (light blue, lower left corner) and digit 6 (green, on the right).
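The PCA projection used here is standard; a minimal sketch via the SVD of the centered activation matrix, with random data standing in for the actual pre-softmax activations:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(1000, 10))  # stand-in for pre-softmax activations

# Center, then project onto the top-2 right singular vectors,
# which are the first two principal components.
A_centered = A - A.mean(axis=0)
U, S, Vt = np.linalg.svd(A_centered, full_matrices=False)
embedding = A_centered @ Vt[:2].T  # (1000, 2) coordinates for the scatter plot
```

Unlike t-SNE, this projection is linear, so relative distances between clusters are directly comparable across the two models up to rotation and scaling.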
B.1.2 Object Recognition
Dataset and Preprocessing
The dataset is described in section 5.2 of the main text. We scale the input to between 0 and 1 by dividing by 256. No other preprocessing is used, because we would like to relate our results to previously reported DNN results on this data, where no preprocessing is applied.
Kernel
A Gaussian kernel is used. We achieve the best performance with 4,000,000 random features, obtained by training single models with 200K random features each in parallel and then combining them. Table 7 shows the performance of the kernel model with respect to different numbers of random features. Similarly, we select the bandwidth from {0.5, 1, 1.5, 2, 3} times the median distance, and the learning rate from a small set of candidate values.
DNN
We trained DNNs with 1 to 4 hidden layers, with 2000 hidden units per layer. In pretraining, we use a Gaussian RBM for the input layer and three Bernoulli RBMs for the intermediate hidden layers, using the CD-1 algorithm. (We adopt the parameterization of the GRBM in [10], which shows better performance.)
For the Gaussian RBM, we tune the learning rate from {}, momentum from {0.2, 0.5, 0.9} (the momentum can be increased to another of the three choices after 5 epochs), and the L2 regularization from {}. For the Bernoulli RBMs, we tune the learning rate from {}, momentum from {0.2, 0.5, 0.9}, and the L2 regularization from {}. In finetuning, we tune SGD with the learning rate from {}, momentum from {0.2, 0.5, 0.9}, decrease the learning rate by a factor of 0.9 every 20 or 50 epochs, set the minibatch size to 50, and the L2 regularization to 0. We use early stopping to control overfitting. When trained with data augmentation, we use a smaller learning rate and run for more epochs.
We used a constant learning rate throughout 30 epochs and updated the momentum after 5 epochs. In the finetuning stage, we used stochastic gradient descent with 0.9 momentum and a fixed learning-rate schedule, decreasing the learning rate by a factor of 10 after 50 epochs. The optimal model is selected according to the classification accuracy on the validation set. We trained DNNs with 1, 2, 3, and 4 hidden layers with 2000 hidden units per layer; overfitting was observed when increasing the model from 3 to 4 hidden layers.
Data Augmentation
We apply additive Gaussian noise with standard deviation {0.1, 0.2, 0.3} to the raw pixels, for both the kernel methods and the DNNs.
Results
Table 8 contrasts the results of kernel models and DNNs. We observe overfitting for the 4-hidden-layer network; the best DNN results come from a 3-hidden-layer architecture, as deeper models start to overfit and give worse validation and test performance. Kernel models achieve the best error rates when data augmentation is used.
Gaussian r.f.  Original  Augmented  

Validation  Test  Validation  Test  
200K  43.74  44.48  42.15  43.13 
1M  43.43  44.08  41.62  42.38 
2M  43.26  44.04  41.47  42.26 
4M  43.22  43.93  41.36  42.23 
Model  Original  Augmented  

Validation  Test  Validation  Test  
kernel  43.22  43.93  41.36  42.23 
4 hidden  43.21  43.74  43.00  43.38 
3 hidden  42.89  43.29  42.93  43.35 
2 hidden  43.30  43.76  43.80  44.81 
1 hidden  48.40  48.94  47.28  47.79 
B.2 Automatic speech recognition
In what follows, we provide comprehensive details on our empirical studies on challenging problems in automatic speech recognition.
B.2.1 Tasks, datasets, and evaluation metrics
Task
We have selected the task of acoustic modeling, a crucial component of automatic speech recognition. In its most basic form, acoustic modeling is analogous to conventional multiclass classification: the goal is to learn a predictive model that assigns phoneme context-dependent state labels to short segments of speech, called frames. Speech signals are highly non-stationary and context-sensitive; acoustic modeling addresses this by using acoustic features extracted from context windows (i.e., neighboring frames in temporal proximity) to capture the transient characteristics of the signals.
Data characteristics
To this end, we use two datasets: the IARPA Babel Program Cantonese (IARPA-babel101-v0.4c) and Bengali (IARPA-babel103b-v0.4b) limited language packs. Each pack contains a 20-hour training set and a 20-hour test set. We designate about 10% of the training data as a held-out set for model selection and tuning (i.e., tuning hyperparameters, etc.). The training, held-out, and test sets contain different speakers. The acoustic data is very challenging: it consists of two-person conversations between people who know each other well (family and friends), recorded over telephone channels (in most cases mobile telephones) from speakers in a wide variety of acoustic environments, including moving vehicles and public places. As a result, it contains many natural phenomena such as mispronunciations, disfluencies, laughter, rapid speech, background noise, and channel variability. Compared to the more familiar TIMIT corpus, which contains about 4 hours of training data, the Babel data is substantially more challenging, as the TIMIT data is read speech recorded in a well-controlled, quiet studio environment.
As is standard in previous work using DNNs for speech recognition, the data is preprocessed using Gaussian mixture models to give alignments between phoneme state labels and 10-millisecond frames of speech [26]. The acoustic features are 360-dimensional real-valued dense vectors. There are 1000 (non-overlapping) phoneme context-dependent state labels for each language pack. For Cantonese, there are about 7.5 million data points for training, 0.9 million for held-out, and 7.2 million for test; for Bengali, 7.7 million for training, 1.0 million for held-out, and 7.1 million for test.
Evaluation metrics
We report three evaluation metrics typically found in mainstream speech recognition research.
Perplexity Given a set of examples $\{(x_n, y_n)\}_{n=1}^{N}$, the perplexity is defined as
$$\mathrm{perp} = \exp\Big(-\frac{1}{N}\sum_{n=1}^{N} \log p(y_n \mid x_n)\Big).$$
The perplexity measure is lower-bounded by 1, attained when all predictions are perfect: $p(y_n \mid x_n) = 1$ for all samples. With random guessing, $p(y_n \mid x_n) = 1/C$, where $C$ is the number of classes, and the perplexity attains $C$.
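For concreteness, the perplexity above can be computed as follows; this is a direct transcription of the definition, not the evaluation code used in the experiments:

```python
import numpy as np

def perplexity(true_label_probs):
    """Perplexity from the model probabilities assigned to the true labels.

    true_label_probs[n] = p(y_n | x_n) for sample n.
    """
    p = np.asarray(true_label_probs, dtype=float)
    return float(np.exp(-np.mean(np.log(p))))

pp_perfect = perplexity([1.0, 1.0, 1.0])        # 1.0 when all predictions are perfect
pp_uniform = perplexity([0.25, 0.25, 0.25])     # close to C = 4 for uniform guessing over 4 classes
```

Perplexity is the exponentiated average negative log-likelihood, so minimizing the cross-entropy training loss directly minimizes it.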
We use the perplexity measure on the heldout for model selection and tuning. This is because the perplexity is often found to be correlated with the next two performance measures.
Accuracy The classification accuracy is defined as
$$\mathrm{acc} = \frac{1}{N}\sum_{n=1}^{N} \mathbf{1}\big[y_n = \arg\max_{c} p(c \mid x_n)\big].$$
Token Error Rate (TER) Speech recognition is inherently a sequence recognition problem. Thus, perp and acc provide only proxies (and intermediate goals) for the sequence recognition error. To measure the latter, a full automatic speech recognition pipeline is necessary, in which the posterior probabilities of the phoneme labels are combined with the probabilities of the language models (over the linguistic units of interest, such as words) to yield the most probable sequence of those units. A best alignment with the ground-truth sequence is then computed, yielding the token error rate. For Bengali, the token error rate is the word error rate (WER); for Cantonese, it is the character error rate (CER).
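The alignment step reduces to a Levenshtein edit distance over token sequences. A minimal sketch, illustrative only (the actual pipeline uses a full decoder and language model to produce the hypothesis first):

```python
def token_error_rate(ref, hyp):
    """TER = (substitutions + insertions + deletions) / len(ref),
    computed via Levenshtein alignment of the two token sequences."""
    n, m = len(ref), len(hyp)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i  # deleting all reference tokens
    for j in range(m + 1):
        d[0][j] = j  # inserting all hypothesis tokens
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution or match
    return d[n][m] / n

# One substitution plus one insertion against a 3-token reference: TER = 2/3.
ter = token_error_rate("the cat sat".split(), "the mat sat down".split())
```

Because insertions are counted, TER can exceed 100% when the hypothesis is much longer than the reference.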
Because it entails performing full speech recognition, obtaining TER is computationally costly; thus it is rarely used for model selection and tuning. Note also that the token error rates obtained on the Babel tasks are much higher than those reported for other conversational speech tasks such as Switchboard or Broadcast News, because we have much less training data for Babel than for those tasks. This low-resource setting is an important one in the speech processing area, given the large number of languages in the world for which speech and language models do not currently exist.
B.2.2 Deep neural net acoustic models
There are many variants of DNN techniques. We chose two flavors that learn from data very differently, in order to have a broader comparison. In both cases, our model tuning is extensive.
IBM’s DNN
We have used IBM's proprietary system Attila for conventional speech recognition, adapted for the above-mentioned Babel task; a detailed description appears in [26]. Attila contains a state-of-the-art acoustic model provided by IBM. It also powers our full ASR pipeline for computing the token error rate (TER). We have also used it to convert raw speech signals into acoustic features. Concretely, the features at a frame are a 40-dimensional speaker-adapted representation that has previously been shown to work well with DNN acoustic models [26]. Features at 8 neighboring contextual frames are concatenated, yielding 360-dimensional features. We have used the same features for our kernel methods.
IBM's DNN acoustic model contains five hidden layers, each of which contains 1,024 units with logistic nonlinearities. The output is a softmax nonlinearity with 1,000 targets that correspond to quinphone context-dependent HMM states clustered using decision trees. All layers in the DNN are fully connected. The training of the DNN occurs in two stages. First, greedy layer-wise discriminative pretraining [45] sets the weights of each layer in a reasonable range. Then, the cross-entropy criterion is minimized with respect to all parameters in the network, using stochastic gradient descent with a minibatch size of 250 samples, without momentum, and with the learning rate annealed based on the reduction of the cross-entropy loss on a held-out set.
RBM-DNN
We have designed another version of DNN, following the original Restricted Boltzmann Machine (RBM)-based training procedure for learning DNNs [22]. Specifically, the pretraining is unsupervised. We have trained DNNs with 1, 2, 3, and 4 hidden layers, and 500, 1000, and 2000 hidden units per layer (thus, 12 architectures in total per language).
The first hidden layer is a Gaussian RBM and the upper layers are Bernoulli RBMs. In pretraining, we use 5 epochs of SGD with the Contrastive Divergence (CD-1) algorithm on all training data. We tuned 3 hyperparameters: the learning rate, the momentum, and the strength of a regularizer. For finetuning, we used error backpropagation, tuning the initial learning rate, the learning rate decay, the momentum, and the strength of another regularizer. The finetuning usually converges in 10 epochs.
B.2.3 Kernel acoustic models
The development of kernel acoustic models does not require combinatorial search over many factors. We experimented with only two types of kernels: Gaussian RBF and Laplacian. The only hyperparameter to tune is the kernel bandwidth, which ranges from 0.3 to 5 times the median of the pairwise distances in the data. (Typically, the median itself works well.)
The random feature dimensions we have used range from 2,000 to 400,000, though stable performance is often observed at 25,000 or above. For training with a very large number of features, we used the parallel training procedure described in section 4.1 of the main text.
All kernel acoustic models are multinomial logistic regressions and are thus trained by convex optimization. As mentioned in section 4.1 of the main text, we use Stochastic Average Gradient (SAG), which efficiently leverages the convexity property. We do tune the step size, selected from a loose range of 4 values.
For additive and multiplicative kernel combinations, we combine only two kernels, one Gaussian and the other Laplacian. For additive combinations, we first train two models, one for each kernel; the combining coefficient is selected from a small set of candidate values. For composite kernels, we compose the Gaussian with the Laplacian. We perform a supervised dimensionality reduction, as described in section 4.2 of the main text; the reduced dimensionality is chosen from 50, 100, or 360. The first kernel's bandwidth is greedily selected to be optimal as a single-kernel acoustic model. The other kernel's bandwidth is selected after composing the features.
Bengali  Cantonese  

Model  perp  acc (%)  perp  acc (%) 
ibm  3.4/3.5  71.5/71.2  6.8/6.16  56.8/58.5 
rbm  3.3/3.4  72.1/71.6  6.2/5.7  58.3/59.3 
1k  3.7/3.8  70.1/69.7  6.8/6.2  57.0/58.3 
a2k  3.6/3.8  70.3/70.0  6.7/6.0  57.1/58.5 
m2k  3.7/3.8  70.3/69.9  6.7/6.1  57.1/58.4 
c2k  3.5/3.6  71.0/70.4  6.5/5.7  57.3/58.8 
Bengali  Cantonese  

(h, L)  perp  acc (%)  perp  acc (%) 
3.9/3.9  69.2/69.3  7.1/6.4  55.8/57.4  
3.5/3.6  70.9/70.7  6.6/6.1  57.3/58.4  
3.5/3.5  71.2/70.9  6.4/5.9  57.7/58.6  
3.4/3.5  71.2/70.8  6.4/5.9  57.5/58.7  
3.7/3.7  70.1/70.1  6.8/6.2  56.4/58.0  
3.4/3.4  71.6/71.4  6.3/5.8  58.2/59.0  
3.4/3.5  71.7/71.3  6.3/5.7  58.0/59.2  
3.3/3.5  71.8/71.4  6.6/5.8  57.1/58.6  
3.6/3.7  70.5/70.3  6.7/6.1  56.9/58.1  
3.4/3.4  71.8/71.4  6.2/5.7  58.3/59.3  
3.4/3.5  71.5/71.2  6.2/5.6  57.8/59.1  
3.3/3.4  72.1/71.6  6.4/5.8  57.8/59.1 
Bengali  Cantonese  

Dim  perp  acc (%)  perp  acc (%) 
2k  4.4/4.4  66.5/66.8  8.5/7.4  52.7/54.8 
5k  4.1/4.2  67.8/67.8  7.8/7.0  53.9/56.0 
10k  4.0/4.1  68.4/68.3  7.5/6.7  54.9/56.6 
25k  3.8/3.9  69.2/69.0  7.1/6.4  55.9/57.3 
50k  3.8/3.9  69.7/69.4  6.9/6.2  56.5/57.9 
100k  3.7/3.8  70.0/69.6  6.8/6.2  56.8/58.2 
200k  3.7/3.8  70.1/69.7  6.8/6.2  57.0/58.3 
Model  Bengali  Cantonese 

ibm  70.4  67.3 
rbm  69.5  66.3 
1k  70.0  65.7 
a2k  73  68.8 
m2k  72.8  69.1 
c2k  71.2  68.1 
B.2.4 Results on Perplexity and Accuracy
Table 9 concisely contrasts the best perplexity and accuracy attained by various systems: ibm (IBM’s DNN), rbm (RBMtrained DNN), 1k (single kernel based model), a2k (additive combination of two kernels), m2k (multiplicative combination of two kernels) and c2k (composite of two kernels). We report the metrics on both the heldout and the test datasets (the numbers are separated by a /). In general, the metrics are consistent across both datasets and perp correlates with acc reasonably well.
On Bengali, across all systems, rbm attains the best perplexity (red-colored numbers in the table), outperforming ibm and suggesting that unsupervised pretraining is advantageous. The best-performing kernel model is c2k, trailing slightly behind rbm and ibm.
Similarly, on Cantonese, rbm performs the best, followed by c2k, both outperforming ibm. As an illustrative example, we show in Table 10 the performance of rbm on Bengali, under different architectures (L is the number of hidden layers and h the number of hidden units per layer). Meanwhile, in Table 11, we show the performance of a single Laplacian kernel acoustic model with different numbers of random features.
Contrasting these two tables, it is interesting to observe that kernel models use far more parameters than DNNs to achieve similar perplexity and accuracy. For instance, for an rbm with a perplexity of , the number of parameters is million, a fraction of that of a comparable kernel model with Dim = 10k ( million parameters). In some way, this ratio provides an intuitive measure of the price of convenience, i.e., of using random features in kernel models instead of adapting features to the data as in DNNs.
B.2.5 Results on Token Error Rates
Table 12 reports the performance of various models measured in TER, an important metric that is more directly relevant to speech recognition errors.
Note that the RBM-trained DNN (rbm) performs best on Bengali, while our best kernel model performs best on Cantonese; both perform better than IBM's DNN. On Cantonese, the improvement of our kernel model over ibm is noticeably large ( reduction in absolute terms).
Table 13 highlights several interesting comparison between rbm and kernel models. Concretely, it seems that DNNs need to be big enough in order to reach the proximity of its best TER. On the other end, the kernel models’ performance plateaus rather quickly. This is the opposite to what we have observed when we compare two methods using perplexity and accuracy.
One possible explanation is that the relationship between perplexity and TER differs across models. This is certainly plausible, given that TER is highly complex to compute and that two different models might explore their parameter spaces very differently.
Another possible explanation is that the two models learn different representations that are biased either toward perplexity or toward TER. Table 14 suggests that this might indeed be true: when we combine the two models, we see handsome gains in performance over each individual one.
Model | Arch.      | TER (%)
------|------------|--------
rbm   |            | 73.1
rbm   |            | 72.7
rbm   |            | 72.4
rbm   |            | 72.2
rbm   |            | 69.8
rbm   |            | 69.5
1k    | Dim = 25k  | 73.1
1k    | Dim = 50k  | 70.2
1k    | Dim = 100k | 70.0
1k    | Dim = 200k | 70.0
Model              | Bengali | Cantonese
-------------------|---------|----------
best single system | 69.5    | 65.7
rbm + 1k           | 69.7    | 65.3
rbm + 1k           | 69.2    | 64.9
rbm + 1k           | 69.1    | 64.9
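The text does not spell out how the rbm + 1k systems combine the two models; a common scheme for combining acoustic models, sketched below under that assumption, takes a weighted average of the per-frame log-posteriors and renormalizes each frame. The function name and the interpolation weight are illustrative.

```python
import numpy as np

def combine_posteriors(log_p_a, log_p_b, weight=0.5):
    """Frame-level combination of two acoustic models: a weighted
    average of per-frame log-posteriors, renormalized per frame."""
    combined = weight * log_p_a + (1.0 - weight) * log_p_b
    # log-sum-exp normalization so each frame is a distribution again
    combined -= np.logaddexp.reduce(combined, axis=1, keepdims=True)
    return combined

# demo: combining two uniform 4-class posteriors over 3 frames
log_p = np.full((3, 4), np.log(0.25))
out = combine_posteriors(log_p, log_p, weight=0.5)
```

Varying the interpolation weight would give a family of combined systems, consistent with the multiple rbm + 1k rows in the table.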
B.2.6 DNNs and kernels learn complementary representations
Inspired by the observations in the previous section, we set out to analyze in what way the representations learned by the two models might be complementary. We have obtained preliminary results.
We took a learned DNN (the best-performing one in terms of TER) and computed its pre-activations at the output layer, i.e., a linear transformation of the last hidden layer's outputs. For the best-performing single-kernel model, we computed the pre-activations similarly. Note that since both models predict the same set of labels, their pre-activations have the same dimensionality.
We performed PCA on them independently and then visualized the results in 2D. Fig. 3 displays the two scatter plots, each with 1000 points representing the means of the learned representations for the data points in each class. For ease of visualization, we color each point not by its phoneme-state label but by its collapsed phone label (of which there are considerably fewer, generally around 40–60).
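The analysis just described can be sketched as follows. This is a hypothetical reconstruction (the function name is ours), assuming the output-layer pre-activations and frame labels have already been extracted from each model.

```python
import numpy as np

def class_mean_pca_2d(preacts, labels):
    """Project per-class means of output-layer pre-activations to 2-D.

    preacts: (n_frames, n_outputs) pre-activations of one model
    labels:  (n_frames,) integer class labels (e.g. collapsed phones)
    """
    classes = np.unique(labels)
    # one mean vector per class, as plotted in the scatter plots
    means = np.stack([preacts[labels == c].mean(axis=0) for c in classes])
    centered = means - means.mean(axis=0)
    # top-2 principal directions via SVD of the centered class means
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return classes, centered @ vt[:2].T

# toy demo with random "pre-activations" and 5 classes
rng = np.random.default_rng(1)
preacts = rng.normal(size=(200, 30))
labels = rng.integers(0, 5, size=200)
classes, coords = class_mean_pca_2d(preacts, labels)
```

Running this on each model's pre-activations independently yields one 2-D point per class per model, which can then be colored by phone label as in Fig. 3.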
An initial examination suggests that the kernel models' representations tend to form clumps for data from the same class. In the figure, the most obvious example is the blue cluster. In contrast, the same blue points do not seem to form a large, tight cluster under the representations learned by the DNN; they are more spread out.
The clumps seem indicative of the Gaussian kernels we used. However, determining how important they are, and in what way the more elaborate patterns in the DNN's representations might be advantageous, requires more careful and detailed analysis. We hope our work has provided enough incentive and tools for that pursuit.