
SIP (2014), vol. 3, e2, page 1 of 29 © The Authors, 2014. The online version of this article is published within an Open Access environment subject to the conditions of the Creative Commons Attribution licence http://creativecommons.org/licenses/by/3.0/. doi:10.1017/ATSIP.2013.99

    OVERVIEW PAPER

A tutorial survey of architectures, algorithms, and applications for deep learning

Li Deng

In this invited paper, my overview material on the same topic as presented in the plenary overview session of APSIPA-2011 and the tutorial material presented in the same conference [1] are expanded and updated to include more recent developments in deep learning. The previous and the updated materials cover both theory and applications, and analyze its future directions. The goal of this tutorial survey is to introduce the emerging area of deep learning or hierarchical learning to the APSIPA community. Deep learning refers to a class of machine learning techniques, developed largely since 2006, where many stages of non-linear information processing in hierarchical architectures are exploited for pattern classification and for feature learning. In the more recent literature, it is also connected to representation learning, which involves a hierarchy of features or concepts where higher-level concepts are defined from lower-level ones and where the same lower-level concepts help to define higher-level ones. In this tutorial survey, a brief history of deep learning research is discussed first. Then, a classificatory scheme is developed to analyze and summarize major work reported in the recent deep learning literature. Using this scheme, I provide a taxonomy-oriented survey on the existing deep architectures and algorithms in the literature, and categorize them into three classes: generative, discriminative, and hybrid. Three representative deep architectures – deep autoencoders, deep stacking networks with their generalization to the temporal domain (recurrent networks), and deep neural networks (pretrained with deep belief networks) – one in each of the three classes, are presented in more detail. Next, selected applications of deep learning are reviewed in broad areas of signal and information processing including audio/speech, image/vision, multimodality, language modeling, natural language processing, and information retrieval. Finally, future directions of deep learning are discussed and analyzed.

    Keywords: Deep learning, Algorithms, Information processing

    Received 3 February 2012; Revised 2 December 2013

I. INTRODUCTION

Signal-processing research nowadays has a significantly widened scope compared with just a few years ago. It has encompassed many broad areas of information processing from low-level signals to higher-level, human-centric semantic information [2]. Since 2006, deep learning, which is more recently referred to as representation learning, has emerged as a new area of machine learning research [3–5]. Within the past few years, the techniques developed from deep learning research have already been impacting a wide range of signal- and information-processing work within the traditional and the new, widened scopes including machine learning and artificial intelligence [1, 5–8]; see a recent New York Times media coverage of this progress in [9]. A series of workshops, tutorials, and special issues or conference special sessions have been devoted exclusively to deep learning and its applications to various classical and expanded signal-processing areas.

    Microsoft Research, Redmond, WA 98052, USA. Phone: 425-706-2719

Corresponding author: L. Deng
Email: [email protected]

These include: the 2013 International Conference on Learning Representations, the 2013 ICASSP's special session on New Types of Deep Neural Network Learning for Speech Recognition and Related Applications, the 2013 ICML Workshop for Audio, Speech, and Language Processing, the 2013, 2012, 2011, and 2010 NIPS Workshops on Deep Learning and Unsupervised Feature Learning, 2013 ICML Workshop on Representation Learning Challenges, 2013 Intern. Conf. on Learning Representations, 2012 ICML Workshop on Representation Learning, 2011 ICML Workshop on Learning Architectures, Representations, and Optimization for Speech and Visual Information Processing, 2009 ICML Workshop on Learning Feature Hierarchies, 2009 NIPS Workshop on Deep Learning for Speech Recognition and Related Applications, 2012 ICASSP deep learning tutorial, the special section on Deep Learning for Speech and Language Processing in IEEE Trans. Audio, Speech, and Language Processing (January 2012), and the special issue on Learning Deep Architectures in IEEE Trans. Pattern Analysis and Machine Intelligence (2013). The author has been directly involved in the research and in organizing several of the events and editorials above, and has seen the emerging nature of the field; hence a need for providing a tutorial survey article here.



Deep learning refers to a class of machine learning techniques, where many layers of information-processing stages in hierarchical architectures are exploited for pattern classification and for feature or representation learning. It is in the intersections among the research areas of neural network, graphical modeling, optimization, pattern recognition, and signal processing. Three important reasons for the popularity of deep learning today are drastically increased chip processing abilities (e.g., GPU units), the significantly lowered cost of computing hardware, and recent advances in machine learning and signal/information-processing research. Active researchers in this area include those at University of Toronto, New York University, University of Montreal, Microsoft Research, Google, IBM Research, Baidu, Facebook, Stanford University, University of Michigan, MIT, University of Washington, and numerous other places. These researchers have demonstrated successes of deep learning in diverse applications of computer vision, phonetic recognition, voice search, conversational speech recognition, speech and image feature coding, semantic utterance classification, hand-writing recognition, audio processing, visual object recognition, information retrieval, and even in the analysis of molecules that may lead to discovering new drugs, as reported recently in [9].

This paper expands my recent overview material on the same topic as presented in the plenary overview session of APSIPA-ASC2011 as well as the tutorial material presented in the same conference [1]. It is aimed to introduce the APSIPA Transactions' readers to the emerging technologies enabled by deep learning. I attempt to provide a tutorial review on the research work conducted in this exciting area since the birth of deep learning in 2006 that has direct relevance to signal and information processing. Future research directions will be discussed to attract interests from more APSIPA researchers, students, and practitioners for advancing signal and information-processing technology as the core mission of the APSIPA community. The remainder of this paper is organized as follows:

• Section II: A brief historical account of deep learning is provided from the perspective of signal and information processing.

• Section III: A three-way classification scheme for a large body of the work in deep learning is developed. A growing number of deep architectures are classified into: (1) generative, (2) discriminative, and (3) hybrid categories, and high-level descriptions are provided for each category.

• Sections IV–VI: For each of the three categories, a tutorial example is chosen to provide more detailed treatment. The examples chosen are: (1) deep autoencoders for the generative category (Section IV); (2) DNNs pretrained with DBN for the hybrid category (Section V); and (3) deep stacking networks (DSNs) and a related special version of recurrent neural networks (RNNs) for the discriminative category (Section VI).

• Section VII: A set of typical and successful applications of deep learning in diverse areas of signal and information processing are reviewed.

    • Section VIII: A summary and future directions are given.

II. A BRIEF HISTORICAL ACCOUNT OF DEEP LEARNING

Until recently, most machine learning and signal-processing techniques had exploited shallow-structured architectures. These architectures typically contain a single layer of non-linear feature transformations and they lack multiple layers of adaptive non-linear features. Examples of the shallow architectures are conventional, commonly used Gaussian mixture models (GMMs) and hidden Markov models (HMMs), linear or non-linear dynamical systems, conditional random fields (CRFs), maximum entropy (MaxEnt) models, support vector machines (SVMs), logistic regression, kernel regression, and multi-layer perceptron (MLP) neural networks with a single hidden layer, including the extreme learning machine. A property common to these shallow learning models is the relatively simple architecture that consists of only one layer responsible for transforming the raw input signals or features into a problem-specific feature space, which may be unobservable. Take the example of an SVM and other conventional kernel methods. They use a shallow linear pattern separation model with one or zero feature transformation layers when the kernel trick is used or otherwise. (Notable exceptions are the recent kernel methods that have been inspired by and integrated with deep learning; e.g., [10–12].) Shallow architectures have been shown effective in solving many simple or well-constrained problems, but their limited modeling and representational power can cause difficulties when dealing with more complicated real-world applications involving natural signals such as human speech, natural sound and language, and natural images and visual scenes.

Human information-processing mechanisms (e.g., vision and speech), however, suggest the need of deep architectures for extracting complex structure and building internal representation from rich sensory inputs. For example, human speech production and perception systems are both equipped with clearly layered hierarchical structures in transforming the information from the waveform level to the linguistic level [13–16]. In a similar vein, the human visual system is also hierarchical in nature, mostly in the perception side but interestingly also in the "generative" side [17–19]. It is natural to believe that the state-of-the-art can be advanced in processing these types of natural signals if efficient and effective deep learning algorithms are developed. Information-processing and learning systems with deep architectures are composed of many layers of non-linear processing stages, where each lower layer's outputs are fed to its immediate higher layer as the input.


The successful deep learning techniques developed so far share two additional key properties: the generative nature of the model, which typically requires adding an additional top layer to perform discriminative tasks, and an unsupervised pretraining step that makes an effective use of large amounts of unlabeled training data for extracting structures and regularities in the input features.

Historically, the concept of deep learning originated from artificial neural network research. (Hence, one may occasionally hear the discussion of "new-generation neural networks".) Feed-forward neural networks or MLPs with many hidden layers are indeed a good example of the models with a deep architecture. Backpropagation, popularized in the 1980s, has been a well-known algorithm for learning the weights of these networks. Unfortunately, backpropagation alone did not work well in practice for learning networks with more than a small number of hidden layers (see a review and analysis in [4, 20]). The pervasive presence of local optima in the non-convex objective function of the deep networks is the main source of difficulties in the learning. Backpropagation is based on local gradient descent, and usually starts at some random initial points. It often gets trapped in poor local optima, and the severity increases significantly as the depth of the networks increases. This difficulty is partially responsible for steering most of the machine learning and signal-processing research away from neural networks and toward shallow models that have convex loss functions (e.g., SVMs, CRFs, and MaxEnt models), for which the global optimum can be obtained efficiently at the cost of less powerful models.

The optimization difficulty associated with the deep models was empirically alleviated when a reasonably efficient, unsupervised learning algorithm was introduced in the two papers of [3, 21]. In these papers, a class of deep generative models was introduced, called the deep belief network (DBN), which is composed of a stack of restricted Boltzmann machines (RBMs). A core component of the DBN is a greedy, layer-by-layer learning algorithm, which optimizes the DBN weights at a time complexity linear in the size and depth of the networks. Separately and with some surprise, initializing the weights of an MLP with a correspondingly configured DBN often produces much better results than initialization with random weights. As such, MLPs with many hidden layers, or deep neural networks (DNNs), which are learned with unsupervised DBN pretraining followed by backpropagation fine-tuning, are sometimes also called DBNs in the literature (e.g., [22–24]). More recently, researchers have been more careful in distinguishing DNNs from DBNs [6, 25], and when a DBN is used to initialize the training of a DNN, the resulting network is called a DBN–DNN [6].

In addition to the supply of good initialization points, the DBN comes with additional attractive features. First, the learning algorithm makes effective use of unlabeled data. Second, it can be interpreted as a Bayesian probabilistic generative model. Third, the values of the hidden variables in the deepest layer are efficient to compute. And fourth, the overfitting problem, which is often observed in models with millions of parameters such as DBNs, and the underfitting problem, which occurs often in deep networks, can be effectively addressed by the generative pretraining step.

An insightful analysis on what speech information DBNs can capture is provided in [26].

The DBN-training procedure is not the only one that makes effective training of DNNs possible. Since the publication of the seminal work [3, 21], a number of other researchers have been improving and applying the deep learning techniques with success. For example, one can alternatively pretrain DNNs layer by layer by considering each pair of layers as a denoising autoencoder regularized by setting a subset of the inputs to zero [4, 27]. Also, "contractive" autoencoders can be used for the same purpose by regularizing via penalizing the gradient of the activities of the hidden units with respect to the inputs [28]. Further, Ranzato et al. [29] developed the sparse encoding symmetric machine (SESM), which has a very similar architecture to RBMs as building blocks of a DBN. In principle, SESM may also be used to effectively initialize the DNN training.

Historically, the use of the generative model of DBN to facilitate the training of DNNs plays an important role in igniting the interest of deep learning for speech feature coding and for speech recognition [6, 22, 25, 30]. After this effectiveness was demonstrated, further research showed many alternative but simpler ways of doing pretraining. With a large amount of training data, we now know how to learn a DNN by starting with a shallow neural network (i.e., with one hidden layer). After this shallow network has been trained discriminatively, a new hidden layer is inserted between the previous hidden layer and the softmax output layer, and the full network is again discriminatively trained. One can continue this process until the desired number of hidden layers is reached in the DNN. And finally, full backpropagation fine-tuning is carried out to complete the DNN training. With more training data and with more careful weight initialization, the above process of discriminative pretraining can also be removed for effective DNN training.
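To make the layer-growing procedure just described concrete, the following sketch grows a small MLP one hidden layer at a time, discriminatively training the whole stack after each insertion and finishing with full backpropagation fine-tuning. It is a minimal NumPy illustration on random toy data; the layer sizes, learning rate, and the choice to re-initialize the softmax layer at each growth step are assumptions of the sketch, not details taken from the cited work.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def forward(layers, x):
    """Return all layer activations: input first, softmax output last."""
    acts = [x]
    for i, (W, b) in enumerate(layers):
        z = acts[-1] @ W + b
        acts.append(softmax(z) if i == len(layers) - 1 else sigmoid(z))
    return acts

def train(layers, x, y_onehot, epochs=50, lr=0.1):
    """Plain backpropagation with a cross-entropy loss on the softmax output."""
    for _ in range(epochs):
        acts = forward(layers, x)
        delta = (acts[-1] - y_onehot) / x.shape[0]       # dLoss/dz at the output
        for i in reversed(range(len(layers))):
            W, b = layers[i]
            gW, gb = acts[i].T @ delta, delta.sum(axis=0)
            if i > 0:                                    # push the error through the sigmoid below
                delta = (delta @ W.T) * acts[i] * (1 - acts[i])
            layers[i] = (W - lr * gW, b - lr * gb)
    return layers

def init(n_in, n_out):
    return 0.1 * rng.standard_normal((n_in, n_out)), np.zeros(n_out)

# Toy data: 200 random 20-dim frames with 3 classes (purely illustrative).
x = rng.standard_normal((200, 20))
y_onehot = np.eye(3)[rng.integers(0, 3, 200)]

hidden, target_hidden_layers = 50, 3
layers = train([init(20, hidden), init(hidden, 3)], x, y_onehot)   # shallow start
while len(layers) - 1 < target_hidden_layers:
    # Insert a fresh hidden layer just below a re-initialized softmax layer and
    # discriminatively train the whole stack again.  Re-initializing the output
    # layer at each step is an assumption of this sketch.
    layers = layers[:-1] + [init(hidden, hidden), init(hidden, 3)]
    layers = train(layers, x, y_onehot)
layers = train(layers, x, y_onehot, epochs=200)                    # final full fine-tuning
```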

In the next section, an overview is provided on the various architectures of deep learning, including and beyond the original DBN published in [3].

III. THREE BROAD CLASSES OF DEEP ARCHITECTURES: AN OVERVIEW

As described earlier, deep learning refers to a rather wide class of machine learning techniques and architectures, with the hallmark of using many layers of non-linear information-processing stages that are hierarchical in nature. Depending on how the architectures and techniques are intended for use, e.g., synthesis/generation or recognition/classification, one can broadly categorize most of the work in this area into three main classes:

1) Generative deep architectures, which are intended to characterize the high-order correlation properties of the observed or visible data for pattern analysis or synthesis purposes, and/or characterize the joint statistical distributions of the visible data and their associated classes.


In the latter case, the use of Bayes rule can turn this type of architecture into a discriminative one.

2) Discriminative deep architectures, which are intended to directly provide discriminative power for pattern classification, often by characterizing the posterior distributions of classes conditioned on the visible data; and

3) Hybrid deep architectures, where the goal is discrimination but is assisted (often in a significant way) with the outcomes of generative architectures via better optimization or/and regularization, or when discriminative criteria are used to learn the parameters in any of the deep generative models in category (1) above.

Note the use of "hybrid" in (3) above is different from that used sometimes in the literature, which refers to the hybrid pipeline systems for speech recognition feeding the output probabilities of a neural network into an HMM [31–33].

By machine learning tradition (e.g., [34]), it may be natural to use a two-way classification scheme according to discriminative learning (e.g., neural networks) versus deep probabilistic generative learning (e.g., DBN, DBM, etc.). This classification scheme, however, misses a key insight gained in deep learning research about how generative models can greatly improve the learning of DNNs and other deep discriminative models via better optimization and regularization. Also, deep generative models may not necessarily need to be probabilistic; e.g., the deep autoencoder. Nevertheless, the two-way classification points to important differences between DNNs and deep probabilistic models. The former is usually more efficient for training and testing, more flexible in its construction, less constrained (e.g., no normalization by the difficult partition function, which can be replaced by sparsity), and is more suitable for end-to-end learning of complex systems (e.g., no approximate inference and learning). The latter, on the other hand, is easier to interpret and to embed domain knowledge, is easier to compose and to handle uncertainty, but is typically intractable in inference and learning for complex systems. This distinction, however, is retained also in the proposed three-way classification, which is adopted throughout this paper.

Below we briefly review representative work in each of the above three classes, where several basic definitions will be used as summarized in Table 1. Applications of these deep architectures are deferred to Section VII.

A) Generative architectures

Associated with this generative category, we often see "unsupervised feature learning", since the labels for the data are not of concern. When applying generative architectures to pattern recognition (i.e., supervised learning), a key concept here is (unsupervised) pretraining. This concept arises from the need to learn deep networks but learning the lower levels of such networks is difficult, especially when training data are limited. Therefore, it is desirable to learn each lower layer without relying on all the layers above and to learn all layers in a greedy, layer-by-layer manner from bottom up.

This is the gist of "pretraining" before subsequent learning of all layers together.

Among the various subclasses of generative deep architecture, the energy-based deep models including autoencoders are the most common (e.g., [4, 35–38]). The original form of the deep autoencoder [21, 30], which we will give more detail about in Section IV, is a typical example in the generative model category. Most other forms of deep autoencoders are also generative in nature, but with quite different properties and implementations. Examples are transforming autoencoders [39], predictive sparse coders and their stacked version, and denoising autoencoders and their stacked versions [27].

Specifically, in denoising autoencoders, the input vectors are first corrupted; e.g., randomizing a percentage of the inputs and setting them to zeros. Then one designs the hidden encoding nodes to reconstruct the original, uncorrupted input data using criteria such as KL distance between the original inputs and the reconstructed inputs. Uncorrupted encoded representations are used as the inputs to the next level of the stacked denoising autoencoder.
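The corruption-and-reconstruction recipe just described can be sketched directly. The following minimal NumPy example trains a single denoising autoencoder layer with tied weights, zeroing a random subset of the inputs and reconstructing the clean inputs; a squared-error loss is used purely for brevity in place of the KL criterion mentioned above, and the clean hidden codes of the first layer become the training data for the next layer in the stack. All sizes and rates are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def train_denoising_layer(x, n_hidden, corrupt=0.3, lr=0.1, epochs=100):
    """One denoising autoencoder layer with tied weights and a squared-error
    reconstruction loss (a simplification of the KL criterion in the text)."""
    n_vis = x.shape[1]
    W = 0.1 * rng.standard_normal((n_vis, n_hidden))
    b_h, b_v = np.zeros(n_hidden), np.zeros(n_vis)
    for _ in range(epochs):
        mask = rng.random(x.shape) > corrupt        # zero out roughly 30% of the inputs
        x_noisy = x * mask
        h = sigmoid(x_noisy @ W + b_h)              # encode the corrupted input
        r = sigmoid(h @ W.T + b_v)                  # reconstruct with tied weights
        d_r = (r - x) * r * (1 - r) / x.shape[0]    # compare against the CLEAN input
        d_h = (d_r @ W) * h * (1 - h)
        W -= lr * (x_noisy.T @ d_h + (h.T @ d_r).T)
        b_h -= lr * d_h.sum(axis=0)
        b_v -= lr * d_r.sum(axis=0)
    return W, b_h

# Stacking: the UNCORRUPTED encoding of each layer is the training data for the next.
x = rng.random((500, 64))
W1, b1 = train_denoising_layer(x, 32)
h1 = sigmoid(x @ W1 + b1)
W2, b2 = train_denoising_layer(h1, 16)
```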

Another prominent type of generative model is the deep Boltzmann machine or DBM [40–42]. A DBM contains many layers of hidden variables, and has no connections between the variables within the same layer. This is a special case of the general Boltzmann machine (BM), which is a network of symmetrically connected units that make stochastic decisions about whether to be on or off. While having a very simple learning algorithm, the general BMs are very complex to study and very slow to compute in learning. In a DBM, each layer captures complicated, higher-order correlations between the activities of hidden features in the layer below. DBMs have the potential of learning internal representations that become increasingly complex, highly desirable for solving object and speech recognition problems. Furthermore, the high-level representations can be built from a large supply of unlabeled sensory inputs, and very limited labeled data can then be used to only slightly fine-tune the model for a specific task at hand.

When the number of hidden layers of the DBM is reduced to one, we have the RBM. Like the DBM, there are no hidden-to-hidden and no visible-to-visible connections. The main virtue of the RBM is that, via composing many RBMs, many hidden layers can be learned efficiently using the feature activations of one RBM as the training data for the next. Such composition leads to the DBN, which we will describe in more detail, together with RBMs, in Section V.
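As an illustration of this composition, the sketch below trains Bernoulli-Bernoulli RBMs with one-step contrastive divergence (CD-1) and stacks them greedily, feeding each RBM's hidden activation probabilities to the next as training data. The sizes, learning rate, and number of epochs are arbitrary toy choices rather than settings from the cited papers.

```python
import numpy as np

rng = np.random.default_rng(2)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def train_rbm(data, n_hidden, lr=0.05, epochs=50):
    """Bernoulli-Bernoulli RBM trained with 1-step contrastive divergence (CD-1)."""
    n_vis = data.shape[1]
    W = 0.01 * rng.standard_normal((n_vis, n_hidden))
    a, b = np.zeros(n_vis), np.zeros(n_hidden)            # visible / hidden biases
    for _ in range(epochs):
        v0 = data
        ph0 = sigmoid(v0 @ W + b)                          # positive phase
        h0 = (rng.random(ph0.shape) < ph0).astype(float)   # sample hidden states
        pv1 = sigmoid(h0 @ W.T + a)                        # one reconstruction step
        ph1 = sigmoid(pv1 @ W + b)                         # negative phase
        n = data.shape[0]
        W += lr * (v0.T @ ph0 - pv1.T @ ph1) / n
        a += lr * (v0 - pv1).mean(axis=0)
        b += lr * (ph0 - ph1).mean(axis=0)
    return W, a, b

def greedy_pretrain(data, layer_sizes):
    """Stack RBMs greedily: each RBM's hidden activation probabilities become
    the 'data' for the next, yielding a DBN that can initialize a DNN."""
    weights, x = [], data
    for n_hidden in layer_sizes:
        W, a, b = train_rbm(x, n_hidden)
        weights.append((W, b))
        x = sigmoid(x @ W + b)
    return weights          # to be fine-tuned later with backpropagation

binary_data = (rng.random((500, 64)) > 0.5).astype(float)
dbn_weights = greedy_pretrain(binary_data, [128, 64, 32])
```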

The standard DBN has been extended to the factored higher-order BM in its bottom layer, with strong results for phone recognition obtained [43]. This model, called the mean-covariance RBM or mcRBM, recognizes the limitation of the standard RBM in its ability to represent the covariance structure of the data. However, it is very difficult to train the mcRBM and to use it at the higher levels of the deep architecture. Furthermore, the strong results published are not easy to reproduce.


    Table 1. Some basic deep learning terminologies.

1. Deep Learning: A class of machine learning techniques, where many layers of information-processing stages in hierarchical architectures are exploited for unsupervised feature learning and for pattern analysis/classification. The essence of deep learning is to compute hierarchical features or representations of the observational data, where the higher-level features or factors are defined from lower-level ones.

2. Deep belief network (DBN): Probabilistic generative models composed of multiple layers of stochastic, hidden variables. The top two layers have undirected, symmetric connections between them. The lower layers receive top-down, directed connections from the layer above.

3. Boltzmann machine (BM): A network of symmetrically connected, neuron-like units that make stochastic decisions about whether to be on or off.

4. Restricted Boltzmann machine (RBM): A special BM consisting of a layer of visible units and a layer of hidden units with no visible-visible or hidden-hidden connections.

5. Deep Boltzmann machine (DBM): A special BM where the hidden units are organized in a deep layered manner, only adjacent layers are connected, and there are no visible–visible or hidden–hidden connections within the same layer.

6. Deep neural network (DNN): A multilayer network with many hidden layers, whose weights are fully connected and are often initialized (pretrained) using stacked RBMs or a DBN. (In the literature, DBN is sometimes used to mean DNN.)

7. Deep auto-encoder: A DNN whose output target is the data input itself, often pretrained with a DBN or using distorted training data to regularize the learning.

8. Distributed representation: A representation of the observed data in such a way that they are modeled as being generated by the interactions of many hidden factors. A particular factor learned from configurations of other factors can often generalize well. Distributed representations form the basis of deep learning.

In the architecture of [43], the mcRBM parameters in the full DBN are not easy to fine-tune using the discriminative information, as is done for the regular RBMs in the higher layers. However, recent work showed that when better features are used, e.g., cepstral speech features subject to linear discriminant analysis or to fMLLR transformation, then the mcRBM is not needed, as the covariance in the transformed data is already modeled [26].

Another representative deep generative architecture is the sum-product network or SPN [44, 45]. An SPN is a directed acyclic graph with the data as leaves, and with sum and product operations as internal nodes in the deep architecture. The "sum" nodes give mixture models, and the "product" nodes build up the feature hierarchy. Properties of "completeness" and "consistency" constrain the SPN in a desirable way. The learning of the SPN is carried out using the EM algorithm together with backpropagation. The learning procedure starts with a dense SPN. It then finds an SPN structure by learning its weights, where zero weights remove the connections. The main difficulty in learning is found to be the common one – the learning signal (i.e., the gradient) quickly dilutes when it propagates to deep layers. Empirical solutions have been found to mitigate this difficulty, as reported in [44], where it was pointed out that despite the many desirable generative properties of the SPN, it is difficult to fine-tune its weights using the discriminative information, limiting its effectiveness in classification tasks. This difficulty has been overcome in the subsequent work reported in [45], where an efficient backpropagation-style discriminative training algorithm for the SPN was presented. It was pointed out that the standard gradient descent, computed by the derivative of the conditional likelihood, suffers from the same gradient diffusion problem well known for the regular deep networks. But when marginal inference is replaced by inferring the most probable state of the hidden variables, such a "hard" gradient descent can reliably estimate the weights of deep SPNs.

Excellent results on (small-scale) image recognition tasks are reported.
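To give a feel for how sum and product nodes interact, here is a tiny hand-built SPN over two binary variables: product nodes multiply indicator leaves with disjoint scopes, the root sum node forms a mixture, and a variable is summed out simply by setting both of its indicators to one. The structure and weights are invented for illustration only.

```python
# A tiny hand-built sum-product network (SPN) over two binary variables X1, X2.
# Product nodes multiply indicator leaves with disjoint scopes; the root sum
# node is a mixture of the product nodes.  Weights are invented for illustration.

def spn(x1_ind, x2_ind):
    """x*_ind = [indicator for X*=0, indicator for X*=1]."""
    p11 = x1_ind[1] * x2_ind[1]      # product node for the (X1=1, X2=1) component
    p00 = x1_ind[0] * x2_ind[0]      # product node for the (X1=0, X2=0) component
    return 0.7 * p11 + 0.3 * p00     # root sum node: a mixture with weights 0.7 / 0.3

# Joint probability P(X1=1, X2=1): clamp the indicators to the evidence.
print(spn([0.0, 1.0], [0.0, 1.0]))   # -> 0.7

# Marginal P(X1=1): sum X2 out by setting both of its indicators to 1.
# This single upward pass is what makes inference in an SPN tractable.
print(spn([0.0, 1.0], [1.0, 1.0]))   # -> 0.7
```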

RNNs can be regarded as a class of deep generative architectures when they are used to model and generate sequential data (e.g., [46]). The "depth" of an RNN can be as large as the length of the input data sequence. RNNs are very powerful for modeling sequence data (e.g., speech or text), but until recently they had not been widely used, partly because they are extremely difficult to train properly due to the well-known "vanishing gradient" problem. Recent advances in Hessian-free optimization [47] have partially overcome this difficulty using second-order information or stochastic curvature estimates. In the recent work of [48], RNNs that are trained with Hessian-free optimization are used as a generative deep architecture in the character-level language modeling (LM) tasks, where gated connections are introduced to allow the current input characters to predict the transition from one latent state vector to the next. Such generative RNN models are demonstrated to be well capable of generating sequential text characters. More recently, Bengio et al. [49] and Sutskever [50] have explored new optimization methods in training generative RNNs that modify stochastic gradient descent and show these modifications can outperform Hessian-free optimization methods. Mikolov et al. [51] have reported excellent results on using RNNs for LM. More recently, Mesnil et al. [52] reported the success of RNNs in spoken language understanding.
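The generative use of an RNN can be summarized by its recurrence. The sketch below rolls an untrained, randomly initialized character-level RNN forward and samples one character at a time; with random weights the output is of course meaningless, so the example only illustrates the state update and the fact that the unrolled computation deepens with the sequence length. The vocabulary and sizes are made up, and this plain recurrence omits the gated connections of the cited work.

```python
import numpy as np

rng = np.random.default_rng(3)
vocab = list("abcdefgh ")                      # toy character vocabulary
V, H = len(vocab), 16                          # vocabulary and hidden-state sizes (arbitrary)

# Untrained, randomly initialized parameters -- architecture illustration only.
Wxh = 0.1 * rng.standard_normal((V, H))
Whh = 0.1 * rng.standard_normal((H, H))
Why = 0.1 * rng.standard_normal((H, V))
bh, by = np.zeros(H), np.zeros(V)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def generate(n_chars, start=0):
    """Each step's hidden state depends on the previous hidden state and the
    previously generated character, so the unrolled 'depth' grows with the
    length of the generated sequence."""
    h = np.zeros(H)
    idx, out = start, []
    for _ in range(n_chars):
        x = np.eye(V)[idx]                     # one-hot encoding of the current character
        h = np.tanh(x @ Wxh + h @ Whh + bh)    # recurrent state update
        p = softmax(h @ Why + by)              # distribution over the next character
        idx = rng.choice(V, p=p)               # sample the next character
        out.append(vocab[idx])
    return "".join(out)

print(generate(40))
```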

As examples of a different type of generative deep models, there has been a long history in speech recognition research where human speech production mechanisms are exploited to construct dynamic and deep structure in probabilistic generative models; for a comprehensive review, see book [53].


Specifically, the early work described in [54–59] generalized and extended the conventional shallow and conditionally independent HMM structure by imposing dynamic constraints, in the form of polynomial trajectories, on the HMM parameters. A variant of this approach has been developed more recently using different learning techniques for time-varying HMM parameters and with the applications extended to speech recognition robustness [60, 61]. Similar trajectory HMMs also form the basis for parametric speech synthesis [62–66]. Subsequent work added a new hidden layer into the dynamic model so as to explicitly account for the target-directed, articulatory-like properties in human speech generation [15, 16, 67–73]. More efficient implementation of this deep architecture with hidden dynamics is achieved with non-recursive or FIR filters in more recent studies [74–76]. The above deep-structured generative models of speech can be shown to be special cases of the more general dynamic Bayesian network model and even more general dynamic graphical models [77, 78]. The graphical models can comprise many hidden layers to characterize the complex relationship between the variables in speech generation. Armed with powerful graphical modeling tools, the deep architecture of speech has more recently been successfully applied to solve the very difficult problem of single-channel, multi-talker speech recognition, where the mixed speech is the visible variable while the unmixed speech is represented in a new hidden layer in the deep generative architecture [79, 80]. Deep generative graphical models are indeed a powerful tool in many applications due to their capability of embedding domain knowledge. However, in addition to the weakness of using non-distributed representations for the classification categories, they are also often implemented with inappropriate approximations in inference, learning, prediction, and topology design, all arising from the inherent intractability of these tasks for most real-world applications. This problem has been partly addressed in the recent work of [81], which provides an interesting direction for making deep generative graphical models potentially more useful in practice in the future.

The standard statistical methods used for large-scale speech recognition and understanding combine (shallow) HMMs for speech acoustics with higher layers of structure representing different levels of the natural language hierarchy. This combined hierarchical model can be suitably regarded as a deep generative architecture, whose motivation and some technical detail may be found in Chapter 7 of the recent book [82] on the "Hierarchical HMM" or HHMM. Related models with greater technical depth and mathematical treatment can be found in [83] for the HHMM and [84] for the Layered HMM. These early deep models were formulated as directed graphical models, missing the key aspect of "distributed representation" embodied in the more recent deep generative architectures of the DBN and DBM discussed earlier in this section.

Finally, temporally recursive and deep generative models can be found in [85] for human motion modeling, and in [86] for natural language and natural scene parsing. The latter model is particularly interesting because the learning algorithms are capable of automatically determining the optimal model structure.

This contrasts with other deep architectures such as the DBN, where only the parameters are learned while the architectures need to be predefined. Specifically, as reported in [86], the recursive structure commonly found in natural scene images and in natural language sentences can be discovered using a max-margin structure prediction architecture. Not only are the units contained in the images or sentences identified, but so is the way in which these units interact with each other to form the whole.

B) Discriminative architectures

Many of the discriminative techniques in signal and information processing apply to shallow architectures such as HMMs (e.g., [87–94]) or CRFs (e.g., [95–100]). Since a CRF is defined with the conditional probability on input data as well as on the output labels, it is intrinsically a shallow discriminative architecture. (Interesting equivalence between the CRF and discriminatively trained Gaussian models and HMMs can be found in [101].) More recently, deep-structured CRFs have been developed by stacking the output in each lower layer of the CRF, together with the original input data, onto its higher layer [96]. Various versions of deep-structured CRFs have been usefully applied to phone recognition [102], spoken language identification [103], and natural language processing [96]. However, at least for the phone recognition task, the performance of deep-structured CRFs, which is purely discriminative (non-generative), has not been able to match that of the hybrid approach involving the DBN, which we will take on shortly.
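The layer-stacking idea behind these deep-structured CRFs can be sketched with any per-frame classifier. In the toy example below, plain logistic regression stands in for the CRF at each layer; each layer's class posteriors are concatenated with the original input features to form the next layer's input. The data, the number of layers, and the substitution of logistic regression for a CRF are all simplifying assumptions of the sketch.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)

# Toy frame-level "features" and labels standing in for speech data.
x_raw = rng.standard_normal((1000, 24))
y = rng.integers(0, 5, 1000)

def stack_layers(x_raw, y, n_layers=3):
    """Greedy stacking in the spirit of deep-structured CRFs (and of the DSN):
    each layer's class posteriors are concatenated with the original input and
    fed to the next layer.  Logistic regression replaces the per-layer CRF
    purely to keep the sketch short."""
    layers, features = [], x_raw
    for _ in range(n_layers):
        clf = LogisticRegression(max_iter=500).fit(features, y)
        layers.append(clf)
        features = np.hstack([x_raw, clf.predict_proba(features)])
    return layers

def predict(layers, x_raw):
    features = x_raw
    for clf in layers:
        posteriors = clf.predict_proba(features)
        features = np.hstack([x_raw, posteriors])
    return posteriors.argmax(axis=1)          # decision of the top layer

layers = stack_layers(x_raw, y)
print("training accuracy of the stack:", (predict(layers, x_raw) == y).mean())
```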

The recent article of [33] gives an excellent review on other major existing discriminative models in speech recognition based mainly on the traditional neural network or MLP architecture using backpropagation learning with random initialization. It argues for the importance of both the increased width of each layer of the neural networks and the increased depth. In particular, a class of DNN models forms the basis of the popular "tandem" approach, where a discriminatively learned neural network is developed in the context of computing discriminant emission probabilities for HMMs. For some representative recent works in this area, see [104, 105]. The tandem approach generates discriminative features for an HMM by using the activities from one or more hidden layers of a neural network with various ways of information combination, which can be regarded as a form of discriminative deep architectures [33, 106].

In the most recent work of [108–110], a new deep learning architecture, sometimes called DSN, together with its tensor variant [111, 112] and its kernel version [11], are developed that all focus on discrimination with scalable, parallelizable learning relying on little or no generative component. We will describe this type of discriminative deep architecture in detail in Section VI.

RNNs have been successfully used as a generative model when the "output" is taken to be the predicted input data in the future, as discussed in the preceding subsection; see also the neural predictive model [113] with the same mechanism.


They can also be used as a discriminative model where the output is an explicit label sequence associated with the input data sequence. Note that such discriminative RNNs were applied to speech a long time ago with limited success (e.g., [114]). For training RNNs for discrimination, presegmented training data are typically required. Also, post-processing is needed to transform their outputs into label sequences. It is highly desirable to remove such requirements, especially the costly presegmentation of training data. Often a separate HMM is used to automatically segment the sequence during training, and to transform the RNN classification results into label sequences [114]. However, the use of the HMM for these purposes does not take advantage of the full potential of RNNs.

An interesting method was proposed in [115–117] that enables the RNNs themselves to perform sequence classification, removing the need for presegmenting the training data and for post-processing the outputs. Underlying this method is the idea of interpreting RNN outputs as the conditional distributions over all possible label sequences given the input sequences. Then, a differentiable objective function can be derived to optimize these conditional distributions over the correct label sequences, where no segmentation of data is required.

Another type of discriminative deep architecture is the convolutional neural network (CNN), with each module consisting of a convolutional layer and a pooling layer. These modules are often stacked up with one on top of another, or with a DNN on top of it, to form a deep model. The convolutional layer shares many weights, and the pooling layer subsamples the output of the convolutional layer and reduces the data rate from the layer below. The weight sharing in the convolutional layer, together with appropriately chosen pooling schemes, endows the CNN with some "invariance" properties (e.g., translation invariance). It has been argued that such limited "invariance" or equi-variance is not adequate for complex pattern recognition tasks and more principled ways of handling a wider range of invariance are needed [39]. Nevertheless, the CNN has been found highly effective and been commonly used in computer vision and image recognition [118–121, 154]. More recently, with appropriate changes from the CNN designed for image analysis to that taking into account speech-specific properties, the CNN is also found effective for speech recognition [122–126]. We will discuss such applications in more detail in Section VII.
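A single convolution-plus-pooling module of the kind described above can be written out in a few lines. The NumPy sketch below applies a bank of shared 1-D filters along a toy 40-bin spectral slice (weight sharing along one dimension, as discussed for speech in the next paragraph), followed by max pooling that reduces the data rate; the filter sizes and the ReLU non-linearity are illustrative choices, not details from the cited works.

```python
import numpy as np

rng = np.random.default_rng(5)

def conv1d(x, filters, bias):
    """Valid 1-D convolution: every output position reuses the same filter
    weights (weight sharing), which is what gives the CNN its (limited)
    shift invariance along the convolved dimension."""
    n_filters, width = filters.shape
    out_len = x.shape[0] - width + 1
    out = np.empty((n_filters, out_len))
    for f in range(n_filters):
        for t in range(out_len):
            out[f, t] = np.dot(x[t:t + width], filters[f]) + bias[f]
    return np.maximum(out, 0.0)                     # ReLU non-linearity

def max_pool1d(feature_maps, pool=3):
    """Max pooling: keep the largest activation in each non-overlapping window,
    reducing the data rate passed to the layer above."""
    n_filters, length = feature_maps.shape
    n_windows = length // pool
    trimmed = feature_maps[:, :n_windows * pool].reshape(n_filters, n_windows, pool)
    return trimmed.max(axis=2)

# One convolution + pooling module applied to a toy 40-bin spectral slice.
spectral_slice = rng.standard_normal(40)
filters = 0.1 * rng.standard_normal((8, 5))         # 8 shared filters of width 5
bias = np.zeros(8)
pooled = max_pool1d(conv1d(spectral_slice, filters, bias))
print(pooled.shape)                                  # (8, 12): 36 positions pooled by 3
```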

It is useful to point out that time-delay neural networks (TDNNs) [127, 129], developed for early speech recognition, are a special case of the CNN when weight sharing is limited to one of the two dimensions, i.e., the time dimension. It was not until recently that researchers discovered that time is the wrong dimension along which to impose "invariance" and that the frequency dimension is more effective for sharing weights and pooling outputs [122, 123, 126]. An analysis of the underlying reasons is provided in [126], together with a new strategy for designing the CNN's pooling layer, demonstrated to be more effective than nearly all previous CNNs in phone recognition.

It is also useful to point out that the model of hierarchical temporal memory (HTM) [17, 128, 130] is another variant and extension of the CNN.

The extension includes the following aspects: (1) time or temporal dimension is introduced to serve as the "supervision" information for discrimination (even for static images); (2) both bottom-up and top-down information flow are used, instead of just bottom-up in the CNN; and (3) a Bayesian probabilistic formalism is used for fusing information and for decision making.

Finally, the learning architecture developed for bottom-up, detection-based speech recognition proposed in [131] and developed further since 2004, notably in [132–134] using the DBN–DNN technique, can also be categorized in the discriminative deep architecture category. There is no intent and mechanism in this architecture to characterize the joint probability of data and recognition targets of speech attributes and of the higher-level phones and words. The most current implementation of this approach is based on multiple layers of neural networks using backpropagation learning [135]. One intermediate neural network layer in the implementation of this detection-based framework explicitly represents the speech attributes, which are simplified entities from the "atomic" units of speech developed in the early work of [136, 137]. The simplification lies in the removal of the temporally overlapping properties of the speech attributes or articulatory-like features. Embedding such more realistic properties in future work is expected to improve the accuracy of speech recognition further.

C) Hybrid generative–discriminative architectures

The term "hybrid" for this third category refers to the deep architecture that either comprises or makes use of both generative and discriminative model components. In many existing hybrid architectures published in the literature (e.g., [21, 23, 25, 138]), the generative component is exploited to help with discrimination, which is the final goal of the hybrid architecture. How and why generative modeling can help with discrimination can be examined from two viewpoints:

1) The optimization viewpoint, where generative models can provide excellent initialization points in highly non-linear parameter estimation problems (the commonly used term "pretraining" in deep learning has been introduced for this reason); and/or

2) The regularization perspective, where generative models can effectively control the complexity of the overall model.

The study reported in [139] provided an insightful analysis and experimental evidence supporting both of the viewpoints above.


When the generative deep architecture of the DBN discussed in Section III-A is subjected to further discriminative training using backpropagation, commonly called "fine-tuning" in the literature, we obtain an equivalent architecture of the DNN. The weights of the DNN can be "pretrained" from stacked RBMs or a DBN instead of the usual random initialization. See [24] for a detailed explanation of the equivalence relationship and the use of the often confusing terminology. We will review details of the DNN in the context of RBM/DBN pretraining, as well as its interface with the most commonly used shallow generative architecture of the HMM (DNN–HMM), in Section V.

Another example of the hybrid deep architecture is developed in [23], where again the generative DBN is used to initialize the DNN weights but the fine-tuning is carried out not using frame-level discriminative information (e.g., the cross-entropy error criterion) but sequence-level information. This is a combination of the static DNN with the shallow discriminative architecture of the CRF. Here, the overall architecture of the DNN–CRF is learned using the discriminative criterion of the conditional probability of full label sequences given the input sequence data. It can be shown that such a DNN–CRF is equivalent to a hybrid deep architecture of DNN and HMM whose parameters are learned jointly using the full-sequence maximum mutual information (MMI) between the entire label sequence and the input vector sequence. A closely related full-sequence training method is carried out with success for a shallow neural network [140] and for a deep one [141].
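For reference, the full-sequence MMI criterion takes the following commonly used form (written here in standard notation rather than quoted from the paper), where X_u is the observation sequence of utterance u, w_u its reference label sequence, s_w the HMM state sequence corresponding to a hypothesis w, and P(w) the language model probability; in practice the sum in the denominator is restricted to a lattice of competing hypotheses:

\[
  F_{\mathrm{MMI}}(\theta) \;=\; \sum_{u} \log
  \frac{p_{\theta}(X_u \mid s_{w_u})\, P(w_u)}
       {\sum_{w} p_{\theta}(X_u \mid s_{w})\, P(w)} .
\]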

Here, it is useful to point out a connection between the above hybrid discriminative training and a highly popular minimum phone error (MPE) training technique for the HMM [89]. In the iterative MPE training procedure using extended Baum–Welch, the initial HMM parameters cannot be arbitrary. One commonly used initial parameter set is that trained generatively using the Baum–Welch algorithm for maximum likelihood. Furthermore, an interpolation term taking the values of generatively trained HMM parameters is needed in the extended Baum–Welch updating formula, which may be analogous to "fine-tuning" in the DNN training discussed earlier. Such I-smoothing [89] has a similar spirit to DBN pretraining in the "hybrid" DNN learning.

Along the line of using discriminative criteria to train parameters in generative models, as in the above HMM training example, we here briefly discuss the same method applied to learning other generative architectures. In [142], the generative model of the RBM is learned using the discriminative criterion of posterior class/label probabilities when the label vector is concatenated with the input data vector to form the overall visible layer in the RBM. In this way, the RBM can be considered as a stand-alone solution to classification problems, and the authors derived a discriminative learning algorithm for the RBM as a shallow generative model.

In the more recent work of [146], the deep generative model of the DBN with the gated MRF at the lowest level is learned for feature extraction and then for recognition of difficult image classes including occlusions. The generative ability of the DBN model facilitates the discovery of what information is captured and what is lost at each level of representation in the deep model, as demonstrated in [146]. A related work on using the discriminative criterion of empirical risk to train deep graphical models can be found in [81].

A further example of the hybrid deep architecture is the use of the generative model of the DBN to pretrain deep convolutional neural networks (deep CNNs) [123, 144, 145]. Like the fully connected DNN discussed earlier, the DBN pretraining is also shown to improve discrimination of the deep CNN over random initialization.

The final example given here of the hybrid deep architecture is based on the idea and work of [147, 148], where one task of discrimination (speech recognition) produces the output (text) that serves as the input to the second task of discrimination (machine translation). The overall system, giving the functionality of speech translation – translating speech in one language into text in another language – is a two-stage deep architecture consisting of both generative and discriminative elements. Both models of speech recognition (e.g., HMM) and of machine translation (e.g., phrasal mapping and non-monotonic alignment) are generative in nature. But their parameters are all learned for discrimination. The framework described in [148] enables end-to-end performance optimization in the overall deep architecture using the unified learning framework initially published in [90]. This hybrid deep learning approach can be applied to not only speech translation but also all speech-centric and possibly other information-processing tasks such as speech information retrieval, speech understanding, cross-lingual speech/text understanding and retrieval, etc. (e.g., [11, 109, 149–153]).

After briefly surveying a wide range of work in each of the three classes of deep architectures above, in the following three sections I will elaborate on three prominent models of deep learning, one from each of the three classes. While ideally they should represent the most influential architectures giving state-of-the-art performance, I have chosen the three that I am most familiar with as being responsible for their developments and that may serve the tutorial purpose well with the simplicity of the architectural and mathematical descriptions. The three architectures described in the following three sections should not be interpreted as the most representative and influential work in each of the three classes. For example, in the category of generative architectures, the highly complex deep architecture and generative training methods developed by and described in [154], which are beyond the scope of this tutorial, perform quite well in image recognition. Likewise, in the category of discriminative architectures, the even more complex architectures and learning described in Kingsbury et al. [141], Seide et al. [155], and Yan et al. [156] gave the state-of-the-art performance in large-scale speech recognition.


IV. GENERATIVE ARCHITECTURE: DEEP AUTOENCODER

A) Introduction

The deep autoencoder is a special type of DNN whose output is the data input itself, and it is used for learning efficient encoding or dimensionality reduction for a set of data. More specifically, it is a non-linear feature extraction method involving no class labels; hence it is generative. An autoencoder uses three or more layers in the neural network:

• An input layer of data to be efficiently coded (e.g., pixels in an image or spectra in speech);

• One or more considerably smaller hidden layers, which will form the encoding; and

• An output layer, where each neuron has the same meaning as in the input layer.

When the number of hidden layers is greater than one, the autoencoder is considered to be deep.

An autoencoder is often trained using one of the many

backpropagation variants (e.g., the conjugate gradient method, steepest descent, etc.). Though often reasonably effective, there are fundamental problems with using backpropagation to train networks with many hidden layers. Once the errors are backpropagated to the first few layers, they become minuscule and training becomes quite ineffective. This causes the network to almost always learn to reconstruct the average of all the training data. Though more advanced backpropagation methods (e.g., the conjugate gradient method) help with this to some degree, the result is still very slow learning and poor solutions. This problem is remedied by using initial weights that approximate the final solution. The process of finding these initial weights is often called pretraining.

A successful pretraining technique developed in [3] for

training deep autoencoders involves treating each neighboring set of two layers as an RBM for pretraining to approximate a good solution, and then using a backpropagation technique to fine-tune so as to minimize the “coding” error. This training technique has been applied to construct a deep autoencoder that maps images to short binary codes for fast, content-based image retrieval. It has also been applied to coding documents (called semantic hashing), and to coding spectrogram-like speech features, which we review below.
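To make the pretrain-then-unroll recipe concrete, the following is a minimal sketch (not the authors' code) of how a stack of pretrained weight matrices can be unrolled into a deep autoencoder; the two matrices stand in for weights learned by the layer-wise RBM procedure of [3], the layer sizes are illustrative, and biases are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Stand-ins for weights obtained by layer-wise RBM pretraining (illustrative sizes).
W1 = 0.01 * rng.standard_normal((256, 500))   # input -> first hidden layer
W2 = 0.01 * rng.standard_normal((500, 128))   # first hidden -> coding layer

def encode(x):
    h = sigmoid(x @ W1)           # lower encoder layer
    return sigmoid(h @ W2)        # coding ("bottleneck") layer

def decode(code):
    h = sigmoid(code @ W2.T)      # decoder reuses the same matrices in reverse order
    return sigmoid(h @ W1.T)      # output layer: same meaning as the input layer

x = rng.random((1, 256))
x_hat = decode(encode(x))
coding_error = np.mean((x - x_hat) ** 2)   # backpropagation fine-tuning would minimize this
```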

B) Use of deep autoencoder to extract speech features

Here we review the more recent work of [30] in developing a similar type of autoencoder for extracting bottleneck speech features instead of image features. Discovery of efficient binary codes related to such features can also be used in speech information retrieval. Importantly, the potential benefits of using discrete representations of speech constructed by this type of deep autoencoder can be derived from an almost unlimited supply of unlabeled data in future-generation speech recognition and retrieval systems.

Fig. 1. The architecture of the deep autoencoder used in [30] for extracting “bottle-neck” speech features from high-resolution spectrograms.

A deep generative model of patches of spectrograms that contain 256 frequency bins and 1, 3, 9, or 13 frames is illustrated in Fig. 1. An undirected graphical model called a Gaussian–binary RBM is built that has one visible layer of linear variables with Gaussian noise and one hidden layer of 500–3000 binary latent variables. After learning the Gaussian–binary RBM, the activation probabilities of its hidden units are treated as the data for training another binary–binary RBM. These two RBMs can then be composed to form a DBN in which it is easy to infer the states of the second layer of binary hidden units from the input in a single forward pass. The DBN used in this work is illustrated on the left side of Fig. 1, where the two RBMs are shown in separate boxes. (See more detailed discussions on the RBM and DBN in the next section.)

The deep autoencoder with three hidden layers is formed

by “unrolling” the DBN using its weight matrices. The lower layers of this deep autoencoder use the matrices to encode the input and the upper layers use the matrices in reverse order to decode the input. This deep autoencoder is then fine-tuned using backpropagation of error derivatives to make its output as similar as possible to its input, as shown on the right side of Fig. 1. After learning is complete, any variable-length spectrogram can be encoded and reconstructed as follows. First, N consecutive overlapping frames of 256-point log power spectra are each normalized to zero mean and unit variance to provide the input to the deep autoencoder. The first hidden layer then uses the logistic function to compute real-valued activations. These real values are fed to the next, coding layer to compute “codes”. The real-valued activations of hidden units in the coding layer are quantized to be either zero or one with 0.5 as the threshold.
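A sketch of the per-window coding step just described, under illustrative sizes and with hypothetical, already-trained encoder matrices `W1` and `W2` (the real system uses the DBN-initialized, fine-tuned weights): each window of log power spectra is normalized, passed through logistic layers, and the coding-layer activations are quantized at 0.5.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def binary_code(window, W1, W2):
    """window: (N, 256) log power spectra; W1: (N*256, H1), W2: (H1, H_code) trained encoder weights."""
    x = window.reshape(-1)
    x = (x - x.mean()) / (x.std() + 1e-8)     # zero mean, unit variance per window
    h = sigmoid(x @ W1)                       # first hidden layer: real-valued activations
    code = sigmoid(h @ W2)                    # coding layer activations
    return (code > 0.5).astype(np.int8)       # quantize with 0.5 as the threshold

def decode_patch(code, W1, W2):
    h = sigmoid(code @ W2.T)                  # upper decoding layer
    return (h @ W1.T).reshape(-1, 256)        # reconstructed (normalized) N-frame patch
```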


Fig. 2. Top to bottom: original spectrogram; reconstructions using input window sizes of N = 1, 3, 9, and 13 while forcing the coding units to be zero or one (i.e., a binary code). The y-axis values indicate FFT bin numbers (a 256-point FFT is used for constructing all spectrograms).

These binary codes are then used to reconstruct the original spectrogram: individual fixed-frame patches are reconstructed first using the two upper layers of network weights, and finally an overlap-and-add technique is used to reconstruct the full-length speech spectrogram from the outputs produced by applying the deep autoencoder to every possible window of N consecutive frames. We show some illustrative encoding and reconstruction examples below.
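The overlap-and-add step can be sketched as follows; the simple averaging of overlapping frame estimates is an assumption, since the exact scheme is not spelled out in the text.

```python
import numpy as np

def overlap_add(patches, total_frames, N, n_bins=256):
    """patches: list of (N, n_bins) reconstructed windows, one starting at each frame index."""
    acc = np.zeros((total_frames, n_bins))
    count = np.zeros((total_frames, 1))
    for start, patch in enumerate(patches):
        acc[start:start + N] += patch         # accumulate each decoded window
        count[start:start + N] += 1.0         # track how many windows cover each frame
    return acc / np.maximum(count, 1.0)       # average the overlapping frame estimates
```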

C) Illustrative examples

At the top of Fig. 2 is the original speech, followed by the reconstructed speech utterances with forced binary values (zero or one) at the 312-unit code layer for encoding window lengths of N = 1, 3, 9, and 13, respectively. The lower coding errors for N = 9 and 13 are clearly seen.

The encoding accuracy of the deep autoencoder is

qualitatively examined in comparison with the more traditional codes obtained via vector quantization (VQ). Figure 3 shows various aspects of the encoding accuracy. At the top is the original speech utterance's spectrogram. The next two spectrograms are the blurry reconstruction from the 312-bit VQ and the much more faithful reconstruction from the 312-bit deep autoencoder. Coding errors from both coders, plotted as a function of time, are shown below the spectrograms, demonstrating that the autoencoder (red curve) produces lower errors than the VQ coder (blue curve) throughout the entire span of the utterance. The final two spectrograms show the detailed coding error distributions over both time and frequency bins.

D) Transforming autoencoder

The deep autoencoder described above can extract a compact code for a feature vector thanks to its many layers and its non-linearity. But the extracted code would change unpredictably when the input feature vector is transformed. It is desirable, instead, for the code to change predictably in a way that reflects the underlying transformation while remaining invariant to the perceived content. This is the goal of the transforming autoencoder proposed in [39] for image recognition.

The building block of the transforming autoencoder is a

“capsule”, which is an independent subnetwork that extracts a single parameterized feature representing a single entity, be it visual or audio. A transforming autoencoder receives both an input vector and a target output vector, which is related to the input vector by a simple global transformation, e.g., the translation of a whole image or a frequency shift due to vocal tract length differences for speech. An explicit representation of the global transformation is also assumed to be known. The bottleneck or coding layer of the transforming autoencoder consists of the outputs of several capsules.

During the training phase, the different capsules learn

to extract different entities in order to minimize the error between the final output and the target.
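The following toy forward pass conveys the capsule idea in the spirit of [39]; every name, size, and the linear generation step are illustrative assumptions rather than the original design. Each capsule extracts one instantiation parameter and a gating probability, the known global shift is added to the parameter, and the gated generation outputs are summed over capsules to match the transformed target.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

D, n_capsules, H = 64, 10, 20                              # illustrative sizes
W_rec  = 0.1 * rng.standard_normal((n_capsules, D, H))     # recognition weights per capsule
w_par  = 0.1 * rng.standard_normal((n_capsules, H))        # hidden -> instantiation parameter
w_gate = 0.1 * rng.standard_normal((n_capsules, H))        # hidden -> gating probability
W_gen  = 0.1 * rng.standard_normal((n_capsules, D))        # generation weights per capsule

def transform_and_reconstruct(x, delta):
    """x: (D,) input; delta: the known global transformation (a scalar shift here)."""
    y = np.zeros(D)
    for c in range(n_capsules):
        h = sigmoid(x @ W_rec[c])              # capsule's recognition units
        param = h @ w_par[c]                   # extracted instantiation parameter
        gate = sigmoid(h @ w_gate[c])          # probability that the entity is present
        y += gate * (param + delta) * W_gen[c] # generation driven by the shifted parameter
    return y                                   # training would match this to the target output

y = transform_and_reconstruct(rng.random(D), delta=0.3)
```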

In addition to the deep autoencoder architectures described in this section, there are many other types of generative architectures in the literature, all characterized by the use of data alone (i.e., free of classification labels) to automatically derive higher-level features. Although such more complex architectures have produced state-of-the-


Fig. 3. Top to bottom: original spectrogram from the test set; reconstruction from the 312-bit VQ coder; reconstruction from the 312-bit autoencoder; coding errors as a function of time for the VQ coder (blue) and the autoencoder (red); spectrogram of the VQ coder residual; spectrogram of the deep autoencoder's residual.

art results (e.g., [154]), their complexity does not permit detailed treatment in this tutorial paper; rather, a brief survey of a broader range of the generative deep architectures was included in Section III-A.

V. HYBRID ARCHITECTURE: DNN PRETRAINED WITH DBN

A) Basics

In this section, we present the most widely studied hybrid deep architecture of DNNs, consisting of both pretraining (using a generative DBN) and fine-tuning stages in its parameter learning. Part of this review is based on the recent publications [6, 7, 25].

As the generative component of the hybrid architecture, the DBN is a

probabilistic model composed of multiple layers of stochastic, latent variables. The unobserved variables can have binary values and are often called hidden units or feature detectors. The top two layers have undirected, symmetric connections between them and form an associative memory. The lower layers receive top-down, directed connections from the layer above. The states of the units in the lowest layer, or the visible units, represent an input data vector.

There is an efficient, layer-by-layer procedure for learning the top-down, generative weights that determine how the variables in one layer depend on the variables in the layer above. After learning, the values of the latent variables in every layer can be inferred by a single, bottom-up pass that starts with an observed data vector in the bottom layer and uses the generative weights in the reverse direction.

DBNs are learned one layer at a time by treating the

values of the latent variables in one layer, when they are being inferred from data, as the data for training the next layer. This efficient, greedy learning can be followed by, or combined with, other learning procedures that fine-tune all of the weights to improve the generative or discriminative performance of the full network. This latter learning procedure constitutes the discriminative component of the hybrid architecture.

Discriminative fine-tuning can be performed by adding

a final layer of variables that represent the desired outputs and backpropagating error derivatives. When networks with many hidden layers are applied to highly structured input data, such as speech and images, backpropagation works much better if the feature detectors in the hidden layers are initialized by learning a DBN to model the structure in the input data, as originally proposed in [21].

A DBN can be viewed as a composition of simple

learning modules obtained by stacking them. This simple learning module is the RBM, which we introduce next.

B) Restricted Boltzmann machine

An RBM is a special type of Markov random field that has one layer of (typically Bernoulli) stochastic hidden units and one layer of (typically Bernoulli or Gaussian) stochastic visible or observable units. RBMs can be represented as bipartite graphs, where all visible units are connected to all


hidden units, and there are no visible–visible or hidden–hidden connections.

In an RBM, the joint distribution p(v, h; θ) over the visible units v and hidden units h, given the model parameters θ, is defined in terms of an energy function E(v, h; θ) as

p(v, h; θ) = exp(−E(v, h; θ)) / Z,

where Z = Σ_v Σ_h exp(−E(v, h; θ)) is a normalization factor or partition function, and the marginal probability that the model assigns to a visible vector v is

p(v; θ) = Σ_h exp(−E(v, h; θ)) / Z.

For a Bernoulli (visible)–Bernoulli (hidden) RBM, the energy function is defined as

E(v, h; θ) = −Σ_{i=1}^{I} Σ_{j=1}^{J} w_{ij} v_i h_j − Σ_{i=1}^{I} b_i v_i − Σ_{j=1}^{J} a_j h_j,

where w_{ij} represents the symmetric interaction term between visible unit v_i and hidden unit h_j, b_i and a_j are the bias terms, and I and J are the numbers of visible and hidden units. The conditional probabilities can be efficiently calculated as


p(h_j = 1 | v; θ) = σ(Σ_{i=1}^{I} w_{ij} v_i + a_j),

p(v_i = 1 | h; θ) = σ(Σ_{j=1}^{J} w_{ij} h_j + b_i),

where σ(x) = 1/(1 + exp(−x)) is the logistic function.
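A tiny numerical companion to these formulas (a sketch, feasible only because the toy model is small enough to enumerate): the Bernoulli–Bernoulli energy, the factorized conditionals, and a brute-force partition function Z and marginal p(v; θ).

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
I, J = 4, 3                                    # toy numbers of visible and hidden units
W = 0.5 * rng.standard_normal((I, J))          # w_ij
b = 0.1 * rng.standard_normal(I)               # visible biases b_i
a = 0.1 * rng.standard_normal(J)               # hidden biases a_j

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def energy(v, h):                              # E(v, h; θ) for the Bernoulli-Bernoulli RBM
    return -(v @ W @ h) - b @ v - a @ h

def p_h_given_v(v):                            # vector of p(h_j = 1 | v; θ)
    return sigmoid(v @ W + a)

def p_v_given_h(h):                            # vector of p(v_i = 1 | h; θ)
    return sigmoid(W @ h + b)

# Brute-force partition function and marginal; intractable for realistic sizes,
# which is why the contrastive divergence approximation below is needed.
Z = sum(np.exp(-energy(np.array(v), np.array(h)))
        for v in product([0, 1], repeat=I) for h in product([0, 1], repeat=J))
v = np.array([1, 0, 1, 1])
p_v = sum(np.exp(-energy(v, np.array(h))) for h in product([0, 1], repeat=J)) / Z
```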

Similarly, for a Gaussian (visible)–Bernoulli (hidden) RBM, the energy is

E(v, h; θ) = −Σ_{i=1}^{I} Σ_{j=1}^{J} w_{ij} v_i h_j + (1/2) Σ_{i=1}^{I} (v_i − b_i)^2 − Σ_{j=1}^{J} a_j h_j.

The corresponding conditional probabilities become

p(h_j = 1 | v; θ) = σ(Σ_{i=1}^{I} w_{ij} v_i + a_j),

p(v_i | h; θ) = N(Σ_{j=1}^{J} w_{ij} h_j + b_i, 1),

where v_i takes real values and follows a Gaussian distribution with mean Σ_{j=1}^{J} w_{ij} h_j + b_i and variance one.

    Gaussian–Bernoulli RBMs can be used to convert real-valued stochastic variables to binary stochastic variables,

Fig. 4. A pictorial view of sampling from an RBM during the “negative” learning phase of the RBM (courtesy of G. Hinton).

which can then be further processed using Bernoulli–Bernoulli RBMs.

The above discussion used the two most common conditional distributions for the visible data in the RBM: Gaussian (for continuous-valued data) and binomial (for binary data). More general types of distributions can also be used in the RBM; see [157] for the use of general exponential-family distributions for this purpose.

Taking the gradient of the log-likelihood log p(v; θ), we can derive the update rule for the RBM weights as

Δw_{ij} = E_data(v_i h_j) − E_model(v_i h_j),

where E_data(v_i h_j) is the expectation observed in the training set and E_model(v_i h_j) is that same expectation under the distribution defined by the model. Unfortunately, E_model(v_i h_j) is intractable to compute, so the contrastive divergence (CD) approximation to the gradient is used, in which E_model(v_i h_j) is replaced by running the Gibbs sampler, initialized at the data, for one full step. The steps in approximating E_model(v_i h_j) are as follows:

• Initialize v_0 at the data;
• Sample h_0 ∼ p(h | v_0);
• Sample v_1 ∼ p(v | h_0);
• Sample h_1 ∼ p(h | v_1).

Then (v_1, h_1) is a sample from the model, serving as a very rough estimate of (v_∞, h_∞), which would be a true sample from the model. Using (v_1, h_1) to approximate E_model(v_i h_j) gives rise to the CD-1 algorithm. The sampling process is depicted pictorially in Fig. 4.

Careful training of RBMs is essential to the success of applying RBMs and related deep learning techniques to solve practical problems. See the technical report [158] for a very useful practical guide to training RBMs.
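A minimal CD-1 update for a Bernoulli–Bernoulli RBM following the four sampling steps above (a sketch; the learning rate, the use of probabilities rather than samples in the statistics, and the absence of momentum or weight decay are simplifications relative to the practical guide [158]).

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W, a, b, lr=0.1):
    """v0: (batch, I) binary data; W: (I, J); a: (J,) hidden biases; b: (I,) visible biases."""
    # Positive phase: h0 ~ p(h | v0)
    p_h0 = sigmoid(v0 @ W + a)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    # Negative phase (one full Gibbs step): v1 ~ p(v | h0), then p(h1 | v1)
    p_v1 = sigmoid(h0 @ W.T + b)
    v1 = (rng.random(p_v1.shape) < p_v1).astype(float)
    p_h1 = sigmoid(v1 @ W + a)
    # Δw_ij ≈ E_data(v_i h_j) − E_model(v_i h_j), with (v1, h1) standing in for the model
    W += lr * (v0.T @ p_h0 - v1.T @ p_h1) / len(v0)
    a += lr * (p_h0.mean(axis=0) - p_h1.mean(axis=0))
    b += lr * (v0.mean(axis=0) - v1.mean(axis=0))
    return W, a, b
```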

The RBM discussed above is a generative model, which characterizes the input data distribution using hidden variables; no label information is involved. However, when label information is available, it can be used together with the data to form the joint “data” set. The same CD learning can then be applied to optimize the approximate “generative” objective function related to the data likelihood. Further, and more interestingly, a “discriminative” objective function can be defined in terms of the conditional likelihood of the labels. This discriminative RBM can be used to “fine-tune” the RBM for classification tasks [142].

Note that the SESM architecture by Ranzato et al. [29]

surveyed in Section III is quite similar to the RBM described above. While they both have a symmetric encoder and


    Fig. 5. Illustration of a DBN/DNN architecture.

decoder, and a logistic non-linearity on top of the encoder, the main difference is that the RBM is trained using (approximate) maximum likelihood, whereas the SESM is trained by simply minimizing the average energy plus an additional code sparsity term. The SESM relies on the sparsity term to prevent flat energy surfaces, while the RBM relies on an explicit contrastive term in the loss, an approximation of the log partition function. Another difference lies in the coding strategy: the code units are “noisy” and binary in the RBM, while they are quasi-binary and sparse in the SESM.

C) Stacking up RBMs to form a DBN/DNN architecture

Stacking a number of RBMs learned layer by layer from the bottom up gives rise to a DBN, an example of which is shown in Fig. 5. The stacking procedure is as follows. After learning a Gaussian–Bernoulli RBM (for applications with continuous features such as speech) or a Bernoulli–Bernoulli RBM (for applications with nominal or binary features such as black–white images or coded text), we treat the activation probabilities of its hidden units as the data for training the Bernoulli–Bernoulli RBM one layer up. The activation probabilities of the second-layer Bernoulli–Bernoulli RBM are then used as the visible data input for the third-layer Bernoulli–Bernoulli RBM, and so on. Some theoretical justification of this efficient layer-by-layer greedy learning strategy is given in [3], where it is shown that the stacking procedure above improves a variational lower bound on the likelihood of the training data under the composite model. That is, the greedy procedure above achieves approximate maximum-likelihood learning. Note that this learning procedure is unsupervised and requires no class labels.
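The greedy layer-by-layer stacking can be sketched as follows (illustrative hyper-parameters; a Bernoulli–Bernoulli CD-1 step is inlined, so the Gaussian–Bernoulli first layer needed for continuous features such as speech is omitted from this sketch).

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden, epochs=10, lr=0.1):
    """Full-batch CD-1 training of one Bernoulli-Bernoulli RBM (simplified)."""
    I = data.shape[1]
    W = 0.01 * rng.standard_normal((I, n_hidden))
    a, b = np.zeros(n_hidden), np.zeros(I)
    for _ in range(epochs):
        p_h0 = sigmoid(data @ W + a)
        h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
        p_v1 = sigmoid(h0 @ W.T + b)
        p_h1 = sigmoid(p_v1 @ W + a)
        W += lr * (data.T @ p_h0 - p_v1.T @ p_h1) / len(data)
        a += lr * (p_h0.mean(axis=0) - p_h1.mean(axis=0))
        b += lr * (data.mean(axis=0) - p_v1.mean(axis=0))
    return W, a

def train_dbn(data, layer_sizes):
    """Greedy stacking: hidden probabilities of one RBM become the next RBM's data."""
    weights, x = [], data
    for n_hidden in layer_sizes:
        W, a = train_rbm(x, n_hidden)
        weights.append((W, a))
        x = sigmoid(x @ W + a)
    return weights     # used to initialize the DNN before discriminative fine-tuning
```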

When applied to classification tasks, the generative pretraining can be followed by or combined with other, typically discriminative, learning procedures that fine-tune all of the weights jointly to improve the performance of the network. This discriminative fine-tuning is performed by adding a final layer of variables that represent the desired outputs or labels provided in the training data. Then, the backpropagation algorithm can be used to adjust or fine-tune the DBN weights and to use the final set of weights in the same way as for a standard feedforward neural network. What goes into the top, label layer of this DNN depends on the application. For speech recognition applications, the top layer, denoted by l_1, l_2, . . . , l_j, . . . , l_L in Fig. 5, can represent either syllables, phones, subphones, phone states, or other speech units used in the HMM-based speech recognition system.

The generative pretraining described above has

produced excellent phone and speech recognition results on a wide variety of tasks, which will be surveyed in Section VII. Further research has also shown the effectiveness of other pretraining strategies. As an example, greedy layer-by-layer training may be carried out with an additional discriminative term added to the generative cost function at each level. And without generative pretraining, purely discriminative training of DNNs from random initial weights using the traditional stochastic gradient descent method has been shown to work very well when the scales of the initial weights are set carefully and the mini-batch sizes, which trade off noisy gradients against convergence speed, used in stochastic gradient descent are adapted prudently (e.g., with an increasing size over training epochs). Also, the randomization order in creating mini-batches needs to be judiciously determined. Importantly, it was found effective to learn a DNN by starting with a shallow neural net with a single hidden layer. Once this has been trained discriminatively (using early stopping to avoid overfitting), a second hidden layer is inserted between the first hidden layer and the labeled softmax output units, and the expanded, deeper network is again trained discriminatively. This can be continued until the desired number of hidden layers is reached, after which a full backpropagation “fine tuning” is applied. This discriminative “pretraining” procedure is found to work well in practice (e.g., [155]).
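The layer-growing discriminative “pretraining” just described can be sketched as follows; plain full-batch gradient descent with a fixed number of epochs stands in for the carefully tuned SGD, mini-batching, and early stopping discussed in the text, and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train(Ws, U, X, Y, epochs=50, lr=0.1):
    """Backprop with cross-entropy loss; Ws: list of hidden weight matrices, U: softmax weights."""
    for _ in range(epochs):
        acts = [X]
        for W in Ws:
            acts.append(sigmoid(acts[-1] @ W))
        P = softmax(acts[-1] @ U)
        delta = (P - Y) / len(X)                     # gradient at the softmax input
        dU, back = acts[-1].T @ delta, delta @ U.T
        grads = []
        for i in range(len(Ws) - 1, -1, -1):
            back = back * acts[i + 1] * (1 - acts[i + 1])
            grads.insert(0, acts[i].T @ back)
            back = back @ Ws[i].T
        U -= lr * dU
        for W, dW in zip(Ws, grads):
            W -= lr * dW
    return Ws, U

def grow_dnn(X, Y, hidden_size, n_hidden_layers):
    """Train shallow, insert a new hidden layer below the softmax, retrain, repeat."""
    Ws = [0.1 * rng.standard_normal((X.shape[1], hidden_size))]
    U = 0.1 * rng.standard_normal((hidden_size, Y.shape[1]))
    Ws, U = train(Ws, U, X, Y)
    while len(Ws) < n_hidden_layers:
        Ws.append(0.1 * rng.standard_normal((hidden_size, hidden_size)))
        Ws, U = train(Ws, U, X, Y)
    return train(Ws, U, X, Y, epochs=200)            # final full "fine tuning"
```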

This type of discriminative “pretraining” procedure is closely related to the learning algorithm developed for the deep architecture called the deep convex/stacking network, to be described in Section VI, where interleaving linear and non-linear layers are used to build up the deep architecture in a modular manner, and the original input vectors are concatenated with the output vectors of each module consisting of a shallow neural net. Discriminative “pretraining” is used to position a subset of the weights in each module in a reasonable region of the parameter space using parallelizable convex optimization, followed by a batch-mode “fine tuning” procedure, which is also parallelizable owing to the closed-form constraint between the two subsets of weights in each module.

Further, purely discriminative training of the full DNN

    from random initial weights is now known to work much


Fig. 6. Interface between the DBN–DNN and HMM to form a DNN–HMM. This architecture has been successfully used in speech recognition experiments reported in [25].

better than had been thought in early days, provided that the scales of the initial weights are set carefully, a large amount of labeled training data is available, and mini-batch sizes over training epochs are set appropriately. Nevertheless, generative pretraining still improves test performance, sometimes by a significant amount, especially for small tasks. Layer-by-layer generative pretraining was originally done using RBMs, but various types of autoencoder with one hidden layer can also be used.

D) Interfacing DNN with HMM

The DBN/DNN discussed above is a static classifier with input vectors having a fixed dimensionality. However, many practical pattern recognition and information-processing problems, including speech recognition, machine translation,

natural language understanding, video processing, and bio-information processing, require sequence recognition. In sequence recognition, sometimes called classification with structured input/output, the dimensionality of both the inputs and the outputs is variable.

The HMM, based on dynamic programming operations,

is a convenient tool to help port the strength of a static classifier to handle dynamic or sequential patterns. Thus, it is natural to combine the DBN/DNN and HMM to bridge the gap between static and sequence pattern recognition. An architecture that shows the interface between a DNN and an HMM is provided in Fig. 6. This architecture has been successfully used in speech recognition experiments as reported in [25].

It is important to note that the unique elasticity of the

temporal dynamics of speech, as elaborated in [53], would require temporally correlated models better than the HMM for the


ultimate success of speech recognition. Integrating such dynamic models, which have realistic co-articulatory properties, with the DNN and possibly other deep learning models to form a coherent dynamic deep architecture is a challenging new research direction.

VI. DISCRIMINATIVE ARCHITECTURES: DSN AND RECURRENT NETWORK

A) Introduction

While the DNN just reviewed has been shown to be extremely powerful in connection with performing recognition and classification tasks, including speech recognition and image classification, training a DBN has proven to be more difficult computationally. In particular, conventional techniques for training the DNN at the fine-tuning phase involve the utilization of a stochastic gradient descent learning algorithm, which is extremely difficult to parallelize across machines. This makes learning at large scale practically impossible. For example, it has been possible to use a single, very powerful GPU machine to train DNN-based speech recognizers with dozens to a few hundreds of hours of speech training data with remarkable results. It is very difficult, however, to scale up this success with thousands or more hours of training data.

Here we describe a new deep learning architecture, the DSN,

which attacks the learning scalability problem. This section is based in part on the recent publications of [11, 107, 111, 112], with expanded discussions.

The central idea of the DSN design relates to the concept of

stacking, as proposed originally in [159], where simple modules of functions or classifiers are composed first and then “stacked” on top of each other to learn complex functions or classifiers. Various ways of implementing stacking operations have been developed in the past, typically making use of supervised information in the simple modules. The new features for the stacked classifier at a higher level of the stacking architecture often come from the concatenation of the classifier output of a lower module and the raw input features. In [160], the simple module used for stacking was a CRF. This type of deep architecture was further developed, with hidden states added, for successful natural language and speech recognition applications where segmentation information is unknown in the training data [96]. Convolutional neural networks, as in [161], can also be considered a stacking architecture, but the supervision information is typically not used until the final stacking module.

The DSN architecture was originally presented in [107],

which also used the name Deep Convex Network, or DCN, to emphasize the convex nature of the main learning algorithm used for learning the network. The DSN discussed in this section makes use of supervision information for stacking each of the basic modules, which takes the simplified form of a multi-layer perceptron. In the basic module, the

Fig. 7. A DSN architecture with input–output stacking. Only four modules are illustrated, each with a distinct color. Dashed lines denote copying layers.

output units are linear and the hidden units are sigmoidal non-linear. The linearity in the output units permits highly efficient, parallelizable, and closed-form estimation (a result of convex optimization) of the output network weights given the hidden units' activities. Owing to the closed-form constraints between the input and output weights, the input weights can also be elegantly estimated in an efficient, parallelizable, batch-mode manner.

The name “convex” used in [107] accentuates the role

of convex optimization in learning the output network weights given the hidden units' activities in each basic module. It also points to the importance of the closed-form constraints, derived from this convexity, between the input and output weights. Such constraints make learning the remaining network parameters (i.e., the input network weights) much easier than otherwise, enabling batch-mode learning of the DSN that can be distributed over CPU clusters. In more recent publications, the name DSN has been used when the key operation of stacking is emphasized.

B) An architectural overview of DSN

A DSN, shown in Fig. 7, includes a variable number of layered modules, wherein each module is a specialized neural network consisting of a single hidden layer and two trainable sets of weights. In Fig. 7, only four such modules


are illustrated, where each module is shown with a separate color. (In practice, up to a few hundred modules have been efficiently trained and used in image and speech classification experiments.)

The lowest module in the DSN comprises a first linear

layer with a set of linear input units, a non-linear layer with a set of non-linear hidden units, and a second linear layer with a set of linear output units.

The hidden layer of the lowest module of a DSN comprises a set of non-linear units that are mapped to the input units by way of a first, lower-layer weight matrix, which we denote by W. For instance, the weight matrix may comprise a plurality of randomly generated values between zero and one, or the weights of an RBM trained separately. The non-linear units may be sigmoidal units that are configured to perform non-linear operations on weighted outputs from the input units (weighted in accordance with the first weight matrix W).

The second, linear layer in any module of a DSN includes a set of output units that are representative of the targets of classification. The non-linear units in each module of the DSN may be mapped to a set of the linear output units by way of a second, upper-layer weight matrix, which we denote by U. This second weight matrix can be learned by way of a batch learning process, such that learning can be undertaken in parallel. Convex optimization can be employed in connection with learning U. For instance, U can be learned based at least in part upon the first weight matrix W, values of the coded classification targets, and values of the input units.

As indicated above, the DSN includes a set of serially connected, overlapping, and layered modules, wherein each module includes the aforementioned three layers – a first linear layer that includes a set of linear input units whose number equals the dimensionality of the input features, a hidden layer that comprises a set of non-linear units whose number is a tunable hyper-parameter, and a second linear layer that comprises a plurality of linear output units whose number equals that of the target classification classes. The modules are referred to herein as being layered because the output units of a lower module are a subset of the input units of an adjacent higher module in the DSN. More specifically, in a second module that is directly above the lowest module in the DSN, the input units can include the output units or hidden units of the lower module(s). The input units can additionally include the raw training data – in other words, the output units of the lowest module can be appended to the input units in the second module, such that the input units of the second module also include the output units of the lowest module.

The pattern discussed above of including output units in

a lower module as a portion of the input units in the adjacent higher module in the DSN, and thereafter learning a weight matrix that describes the connection weights between the hidden units and the linear output units via convex optimization, can continue for many modules. A resultant learned DSN may then be deployed in connection with an automatic classification task such as frame-level speech phone

or state classification. Connecting the DSN's output to an HMM or any dynamic programming device enables continuous speech recognition and other forms of sequential pattern recognition.
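A forward-pass sketch of the input–output stacking in Fig. 7 (module sizes are the caller's responsibility: the lowest module's W must accept the raw input, and every higher module's W must accept the raw input concatenated with the lower module's output).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dsn_predict(x, modules):
    """x: (D,) raw input; modules: list of (W, U) pairs, lowest module first.
    W maps the module's input to its hidden layer; U maps hidden units to linear outputs."""
    inp, y = x, None
    for W, U in modules:
        h = sigmoid(W.T @ inp)            # sigmoidal hidden units of this module
        y = U.T @ h                       # linear output units (class targets)
        inp = np.concatenate([x, y])      # raw input plus this module's output feeds the next
    return y                              # prediction of the topmost module
```

Appending the lower module's hidden units instead of (or in addition to) its outputs, as mentioned in the text, is a straightforward variant of the same concatenation step.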

C) Learning DSN weights

Here, some technical detail is provided as to how the use of linear output units in the DSN facilitates the learning of the DSN weights. For simplicity, a single module is used to illustrate the advantage. First, it is clear that the upper-layer weight matrix U can be efficiently learned once the activity matrix H over all training samples in the hidden layer is known. Let us denote the training vectors by X = [x_1, . . . , x_i, . . . , x_N], in which each vector is denoted by x_i = [x_{1i}, . . . , x_{ji}, . . . , x_{Di}]^T, where D is the dimension of the input vector, which is a function of the block, and N is the total number of training samples. Denote by L the number of hidden units and by C the dimension of the output vector. Then the output of a DSN block is y_i = U^T h_i, where h_i = σ(W^T x_i) is the hidden-layer vector for sample i, U is the L × C weight matrix at the upper layer of a block, W is the D × L weight matrix at the lower layer of a block, and σ(·) is a sigmoid function. Bias terms are implicitly represented in the above formulation if x_i and h_i are augmented with ones.

Given target vectors in the full training set with a total

of N samples, T = [t_1, . . . , t_i, . . . , t_N], where each vector is t_i = [t_{1i}, . . . , t_{ji}, . . . , t_{Ci}]^T, the parameters U and W are learned so as to minimize the average of the total square error below:

E = (1/2) Σ_n ||y_n − t_n||^2 = (1/2) Tr[(Y − T)(Y − T)^T],

    where the output of the network is

y_n = U^T h_n = U^T σ(W^T x_n) = G_n(U, W),

which depends on both weight matrices, as in the standard neural net. Assume that H = [h_1, . . . , h_i, . . . , h_N] is known, or equivalently, that W is known. Then setting the error derivative with respect to U to zero gives

U = (H H^T)^{−1} H T^T = F(W), where h_n = σ(W^T x_n).

This provides an explicit constraint between U and W, which were treated independently in the popular backprop algorithm.
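In code, the closed-form estimate of U given H is a single linear solve (a sketch; the small ridge term is an assumption added for numerical stability, not part of the formula above).

```python
import numpy as np

def solve_upper_weights(H, T, ridge=1e-6):
    """H: (L, N) hidden activities; T: (C, N) targets. Returns U with shape (L, C)."""
    L = H.shape[0]
    # U = (H H^T)^(-1) H T^T, with a tiny ridge on the Gram matrix
    return np.linalg.solve(H @ H.T + ridge * np.eye(L), H @ T.T)
```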

Now, given the equality constraint U = F(W), let us use the Lagrangian multiplier method to solve the optimization problem in learning W, optimizing the Lagrangian:

E = (1/2) Σ_n ||G_n(U, W) − t_n||^2 + λ ||U − F(W)||,

we can derive a batch-mode gradient descent learning algorithm in which the gradient takes the following form [108]:

∂E/∂W = 2X [ H^T ◦ (1 − H)^T ◦ [ H^†(H T^T)(T H^†) − T^T(T H^†) ] ],


Fig. 8. Comparisons of one single module of a DSN (left) and that of a tensorized DSN (TDSN). Two equivalent forms of a TDSN module are shown to the right.

where H^† = H^T (H H^T)^{−1} is the pseudo-inverse of H and the symbol ◦ denotes component-wise multiplication.

Compared with backprop, the above method has less

noise in gradient computation due to the exploitation of the explicit constraint U = F(W). As such, it was found experimentally that, unlike backprop, batch training is effective, which aids parallel learning of the DSN.
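A direct numpy transcription of the gradient above (a sketch; shapes follow the notation in the text: X is D × N, W is D × L, H is L × N, T is C × N, and ◦ is element-wise multiplication).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dsn_w_gradient(X, W, T):
    H = sigmoid(W.T @ X)                                   # hidden activities, L x N
    H_pinv = H.T @ np.linalg.inv(H @ H.T)                  # H† = H^T (H H^T)^(-1), N x L
    inner = H_pinv @ (H @ T.T) @ (T @ H_pinv) - T.T @ (T @ H_pinv)   # N x L
    return 2.0 * X @ (H.T * (1.0 - H.T) * inner)           # dE/dW, D x L
```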

D) Tensorized DSN

The DSN architecture discussed so far has recently been generalized to its tensorized version, which we call the TDSN [111, 112]. It has the same scalability as the DSN in terms of the parallelizability of its learning, but it generalizes the DSN by providing higher-order feature interactions that are missing in the DSN.

The architecture of the TDSN is similar to that of the DSN in the

way that the stacking operation is carried out. That is, modules of the TDSN are stacked up in a similar way to form a deep architecture. The differences between the TDSN and the DSN lie mainly in how each module is constructed. In the DSN, we have one set of hidden units forming a hidden layer, as shown in the left panel of Fig. 8. In contrast, each module of a TDSN contains two independent hidden layers, denoted “Hidden 1” and “Hidden 2” in the middle and right panels of Fig. 8. As a result of this difference, the upper-layer weights, denoted by “U” in Fig. 8, change from a matrix (a two-dimensional array) in the DSN to a tensor (a three-dimensional array) in the TDSN, shown as a cube labeled “U” in the middle panel.

The tensor U has a three-way connection, one to the

prediction layer and the remaining two to the two separate hidden layers. An equivalent form of this TDSN module is shown in the right panel of Fig. 8, where the implicit hidden layer is formed by expanding the two separate hidden layers into their outer product. The resulting large vector contains all possible pairwise products of the two sets of hidden-layer variables. This turns the tensor U into a matrix again, whose dimensions are (1) the size of the prediction layer and (2) the product of the two hidden layers' sizes. Such equivalence enables the same convex optimization for learning U developed for