Deep Learning

[TOC]

Pre: Notation

$e^{(i)}$: standard basis vector $[0, \dots, 0, 1, 0, \dots, 0]$ with a 1 at position $i$
$\boldsymbol{A}$: matrix
$\mathsf{A}$: tensor (a tensor is an array of numbers arranged on a grid with a variable number of axes; a scalar is a 0th-order tensor, a vector a 1st-order tensor, and a matrix a 2nd-order tensor — see the short NumPy sketch after this notation list)
[notation figures missing; one of them defined the parent nodes of a given node in a graph]
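
As a quick check on the tensor-order terminology above, here is a minimal NumPy sketch; the arrays are arbitrary examples, not taken from the book:

```python
import numpy as np

# A tensor is an array of numbers on a regular grid with some number of axes (its order).
scalar = np.array(3.0)                    # 0th-order tensor
vector = np.array([1.0, 2.0, 3.0])        # 1st-order tensor
matrix = np.array([[1.0, 2.0],
                   [3.0, 4.0]])           # 2nd-order tensor
tensor3 = np.zeros((2, 3, 4))             # 3rd-order tensor

for name, t in [("scalar", scalar), ("vector", vector),
                ("matrix", matrix), ("tensor3", tensor3)]:
    print(name, "order =", t.ndim, "shape =", t.shape)
```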

Element-wise (Hadamard) product: element-wise multiplication of two matrices (not ordinary matrix multiplication)
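
A minimal NumPy sketch of the distinction between the element-wise product and ordinary matrix multiplication; the matrices are arbitrary examples:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[10, 20],
              [30, 40]])

hadamard = A * B   # element-wise (Hadamard) product: C[i, j] = A[i, j] * B[i, j]
matmul   = A @ B   # ordinary matrix multiplication, shown for contrast

print(hadamard)    # [[ 10  40]
                   #  [ 90 160]]
print(matmul)      # [[ 70 100]
                   #  [150 220]]
```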

Chapter 1: Introduction


representation learning
factors of variation
MLP: multilayer perceptron
Logistic Regression

Chapter 2: Linear Algebra

Hadamard product: $A \odot B$
Norms:

Euclidean norm: the L2 norm.
The L1 norm is the sum of the absolute values of the elements: $\|x\|_1 = \sum_i |x_i|$.
The L2 norm is the square root of the sum of the squared elements: $\|x\|_2 = \sqrt{\sum_i x_i^2}$.
Max norm: the absolute value of the element with the largest magnitude, $\|x\|_\infty = \max_i |x_i|$.
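
A small NumPy sketch computing these norms directly and checking the results against `np.linalg.norm`; the vector is an arbitrary example:

```python
import numpy as np

x = np.array([3.0, -4.0, 1.0])

l1 = np.sum(np.abs(x))        # L1 norm: sum of absolute values
l2 = np.sqrt(np.sum(x ** 2))  # L2 (Euclidean) norm: sqrt of sum of squares
mx = np.max(np.abs(x))        # max norm: largest absolute value

print(l1, np.linalg.norm(x, 1))       # 8.0 8.0
print(l2, np.linalg.norm(x, 2))       # 5.099... 5.099...
print(mx, np.linalg.norm(x, np.inf))  # 4.0 4.0
```
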
SVD: singular value decomposition
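
A minimal NumPy sketch of SVD, factoring a matrix as $A = U \Sigma V^\top$ and verifying the reconstruction; the matrix is an arbitrary example:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

# Reduced SVD: U is 2x2, s holds the singular values, Vt is 2x3.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Rebuild A from its factors: A = U @ diag(s) @ Vt.
A_rebuilt = U @ np.diag(s) @ Vt

print(s)                          # singular values, largest first
print(np.allclose(A, A_rebuilt))  # True
```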

-------------End of this passage-------------