• CS6890 Deep Learning

  • Linear Regression, Perceptron, Logistic and Softmax Regression
  • Feed-Forward Neural Networks and Backpropagation
  • Unsupervised Feature Learning
    1. Autoencoder
      1. Sparse Autoencoders
      2. Denoising Autoencoders
      3. Contractive Autoencoders
    2. PCA, PCA whitening, and ZCA whitening
      1. PCA as Autoencoder (http://www.cs.toronto.edu/~urtasun/courses/CSC411/14_pca.pdf)
      2. PCA whitening
        1. The goal of PCA whitening is twofold: the features should be uncorrelated with each other, and they should all have the same variance. After the PCA rotation the features are already uncorrelated, so we only have to rescale each one to unit variance (see the whitening sketch after this outline).
      3. ZCA: another whitening method; it rotates the PCA-whitened data back into the original feature space.
    3. Sparse Coding
      1. Sparse Autoencoder vs. Sparse Coding
      2. Sparse Coding vs. PCA
    4. Independent Component Analysis
      1. We want to find a transform matrix W such that the components of x' = Wx are statistically independent; x is typically whitened first, and W is constrained to be orthonormal so that the covariance of x' stays I (see the ICA sketch after this outline).
    5. Unsupervised Learning of Word Representations
    6. Canonical Correlation Analysis
    7. Self-taught learning and Deep Learning
      1. ReLU vs. Sigmoid and Tanh
    8. Convolutional Neural Networks
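
A minimal NumPy sketch of PCA whitening and ZCA whitening, as referenced in the outline above; the function names and the `eps` regularizer are illustrative choices, not course code.

```python
import numpy as np

def pca_whiten(X, eps=1e-5):
    """Rotate data onto principal components and rescale each component to unit variance."""
    Xc = X - X.mean(axis=0)                       # center the data
    cov = Xc.T @ Xc / Xc.shape[0]                 # feature covariance matrix
    eigvals, U = np.linalg.eigh(cov)              # eigendecomposition (columns of U = principal directions)
    X_rot = Xc @ U                                # decorrelate: project onto the eigenbasis
    X_pca_white = X_rot / np.sqrt(eigvals + eps)  # normalize each component's variance to 1
    return X_pca_white, U, eigvals

def zca_whiten(X, eps=1e-5):
    """ZCA whitening: PCA-whiten, then rotate back so the result stays close to the original data."""
    X_pca_white, U, _ = pca_whiten(X, eps)
    return X_pca_white @ U.T                      # the extra rotation U^T is what distinguishes ZCA from PCA whitening

if __name__ == "__main__":
    X = np.random.randn(1000, 5) @ np.random.randn(5, 5)   # correlated toy data
    Xw, _, _ = pca_whiten(X)
    print(np.round(np.cov(Xw, rowvar=False), 2))            # ~ identity covariance
```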

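A short sketch of the ICA point above, using scikit-learn's `FastICA` on toy mixed signals; the sources and mixing matrix here are made up purely for illustration.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Toy blind-source-separation example (illustrative data, not from the course).
t = np.linspace(0, 8, 2000)
S = np.c_[np.sin(2 * t), np.sign(np.cos(3 * t))]    # two independent sources
A = np.array([[1.0, 0.5],
              [0.5, 2.0]])                           # "unknown" mixing matrix
X = S @ A.T                                          # observed mixtures, shape (n_samples, 2)

# FastICA first whitens X (so its covariance becomes I), then searches for the
# rotation W that makes the recovered components x' = Wx statistically independent.
ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)                         # estimated independent components

print(np.round(np.cov(S_est, rowvar=False), 2))      # ~ diagonal: components are uncorrelated
```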