Abstract: No hype and no hate: this may well be the most complete collection of machine learning study material ever assembled! This post gathers what are widely regarded as the best tutorials available today. It is by no means an exhaustive list of every ML-related tutorial on the web; it is carefully curated, since not everything online is worth your time. My goal in compiling it was to supplement my upcoming book by finding the best tutorials in machine learning and NLP. With the best tutorials collected in one place, I can quickly find the material I need, instead of wading through broad book chapters and frustrating research papers, which, as you may know, are usually hard to get through when your math background is shaky. Why not just buy a book? Because no single author is an expert in everything. When you are trying to learn a specific topic or want a different perspective, a focused tutorial can be extremely helpful.

I have divided this post into four sections: Machine Learning, NLP, Python, and Math. Each section covers a number of topics, but machine learning is such a complex discipline that I cannot possibly include them all. If you know of good tutorials I have missed, please let me know! I will keep improving this collection. In choosing these links, I tried to ensure that each one offers material the others do not, presents the information in a different way (for example, code versus slides), or takes a different angle.

Machine Learning

Start Here with Machine Learning
https:///start-here/

Machine Learning is Fun!
https:///@ageitgey/machine-learning-is-fun-80ea3ec3c471

Rules of Machine Learning: Best Practices for ML Engineering
http://martin./rules_of_ml/rules_of_ml.pdf

Machine Learning Crash Course: Part I, Part II, Part III (Berkeley ML)
https://ml./blog/2016/11/06/tutorial-1/
https://ml./blog/2016/12/24/tutorial-2/
https://ml./blog/2017/02/04/tutorial-3/

An Introduction to Machine Learning Theory and Its Applications: A Visual Tutorial with Examples
https://www./machine-learning/machine-learning-theory-an-introductory-primer

A Gentle Guide to Machine Learning
https:///blog/a-gentle-guide-to-machine-learning/

Which Machine Learning Algorithm Should I Use?
https://blogs./content/subconsciousmusings/2017/04/12/machine-learning-algorithm-use/

A Machine Learning Primer
https://www./content/dam/SAS/en_us/doc/whitepaper1/machine-learning-primer-108796.pdf

Machine Learning Tutorial for Beginners
https://www./kanncaa1/machine-learning-tutorial-for-beginners

Activation and Dropout Functions

Sigmoid Neurons
http:///chap1.html

What is the role of the activation function in a neural network?
https://www./What-is-the-role-of-the-activation-function-in-a-neural-network

A comprehensive list of activation functions in neural networks, with pros and cons
https://stats./questions/115258/comprehensive-list-of-activation-functions-in-neural-networks-with-pros-cons

Activation Functions and Their Types: Which Is Better?
https:///towards-data-science/activation-functions-and-its-types-which-is-better-a9a5310cc8f

Making Sense of Logarithmic Loss
http://www./blog/2015/12/making-sense-logarithmic-loss/

Loss Functions (Stanford CS231n)
http://cs231n./neural-networks-2/

L1 vs. L2 Loss Functions
http://rishy./ml/2015/07/28/l1-vs-l2-loss/

The Cross-Entropy Cost Function
http:///chap3.html
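As a quick companion to the activation and loss function tutorials above, here is a minimal sketch in plain NumPy (a toy example of my own, not taken from any of the linked posts) of the sigmoid activation and the log loss it is typically paired with.

```python
import numpy as np

def sigmoid(z):
    # Squashes any real number into (0, 1), so the output reads as a probability.
    return 1.0 / (1.0 + np.exp(-z))

def log_loss(y_true, y_pred, eps=1e-12):
    # Binary cross-entropy; clipping keeps log() away from zero.
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

z = np.array([-2.0, 0.0, 3.0])   # raw scores from a neuron
p = sigmoid(z)                   # -> approx. [0.12, 0.50, 0.95]
print(log_loss(np.array([0, 0, 1]), p))
```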
Bias

Role of Bias in Neural Networks
https:///questions/2480650/role-of-bias-in-neural-networks/2499936

Bias Nodes in Neural Networks
http://makeyourownneuralnetwork./2016/06/bias-nodes-in-neural-networks.html

What is bias in an artificial neural network?
https://www./What-is-bias-in-artificial-neural-network

Perceptrons

Perceptrons
http:///chap1.html

Perceptrons
http:///book/chapter-10-neural-networks/

Single-Layer Neural Networks (Perceptrons)
http://computing./~humphrys/Notes/Neural/single.neural.html

From Perceptrons to Deep Networks
https://www./machine-learning/an-introduction-to-deep-learning-from-perceptrons-to-deep-networks

Regression

Introduction to Linear Regression Analysis
http://people./~rnau/regintro.htm

Linear Regression
http://ufldl./tutorial/supervised/LinearRegression/

Linear Regression
http://ml-cheatsheet./en/latest/linear_regression.html

Logistic Regression
http://ml-cheatsheet./en/latest/logistic_regression.html

Simple Linear Regression Tutorial for Machine Learning
http:///simple-linear-regression-tutorial-for-machine-learning/

Logistic Regression Tutorial for Machine Learning
http:///logistic-regression-tutorial-for-machine-learning/

Softmax Regression
http://ufldl./tutorial/supervised/SoftmaxRegression/

Gradient Descent

Learning with Gradient Descent
http:///chap1.html

Gradient Descent
http://iamtrask./2015/07/27/python-network-part2/

How to Understand the Gradient Descent Algorithm
http://www./2017/04/simple-understand-gradient-descent-algorithm.html

An Overview of Gradient Descent Optimization Algorithms
http:///optimizing-gradient-descent/

Optimization: Stochastic Gradient Descent (Stanford CS231n)
http://cs231n./optimization-1/

Generative Learning

Generative Learning Algorithms (Stanford CS229)
http://cs229./notes/cs229-notes2.pdf

A Practical Explanation of a Naive Bayes Classifier
https:///blog/practical-explanation-naive-bayes-classifier/

Support Vector Machines

An Introduction to Support Vector Machines (SVM)
https:///blog/introduction-to-support-vector-machines-svm/

Support Vector Machines (Stanford CS229)
http://cs229./notes/cs229-notes3.pdf

Linear Classification: Support Vector Machines, Softmax
http://cs231n./linear-classify/

Backpropagation

Yes, you should understand backprop (/@karpathy)

Can you give a visual explanation of the backpropagation algorithm for neural networks?
https://github.com/rasbt/python-machine-learning-book/blob/master/faq/visual-backpropagation.md

How the Backpropagation Algorithm Works
http:///chap2.html

Backpropagation Through Time and Vanishing Gradients
http://www./2015/10/recurrent-neural-networks-tutorial-part-3-backpropagation-through-time-and-vanishing-gradients/

A Gentle Introduction to Backpropagation Through Time
http:///gentle-introduction-backpropagation-time/

Backpropagation, Intuitions (Stanford CS231n)
http://cs231n./optimization-2/

Deep Learning

A Guide to Deep Learning by YN²
http://cs231n./optimization-2/

Deep Learning Papers Reading Roadmap
https://github.com/floodsung/Deep-Learning-Papers-Reading-Roadmap

Deep Learning in a Nutshell
http:///2014/12/29/deep-learning-in-a-nutshell/

A Tutorial on Deep Learning
http://ai./~quocle/tutorial1.pdf

What is Deep Learning?
http:///what-is-deep-learning/

What's the Difference Between Artificial Intelligence, Machine Learning, and Deep Learning?
https://blogs./blog/2016/07/29/whats-difference-artificial-intelligence-machine-learning-deep-learning-ai/

Deep Learning: A Simple Introduction
https://gluon./
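To complement the gradient descent and regression tutorials above, here is a minimal sketch (a toy example of my own, with made-up data; none of it is drawn from the linked posts) of batch gradient descent fitting a one-variable linear regression by minimizing mean squared error.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=100)
y = 3.0 * x + 0.5 + rng.normal(scale=0.1, size=100)   # ground truth: w = 3.0, b = 0.5

w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    y_hat = w * x + b
    grad_w = 2.0 * np.mean((y_hat - y) * x)   # d(MSE)/dw
    grad_b = 2.0 * np.mean(y_hat - y)         # d(MSE)/db
    w -= lr * grad_w                          # step against the gradient
    b -= lr * grad_b

print(w, b)   # should end up close to 3.0 and 0.5
```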
Optimization and Dimensionality Reduction

Seven Techniques for Data Dimensionality Reduction
https://www./blog/seven-techniques-for-data-dimensionality-reduction

Principal Component Analysis (Stanford CS229)
http://cs229./notes/cs229-notes10.pdf

Dropout: A Simple Way to Improve Neural Networks
http:///site/normal_dl/tag=741100/nips2012_hinton_networks_01.pdf

How to Train Your Deep Neural Network?
http://rishy./ml/2017/01/05/how-to-train-your-dnn/

Long Short-Term Memory (LSTM)

A Gentle Introduction to Long Short-Term Memory Networks
http:///gentle-introduction-long-short-term-memory-networks-experts/

Understanding LSTM Networks
http://colah./posts/2015-08-Understanding-LSTMs/

Exploring LSTMs
http://blog./2017/05/30/exploring-lstms/

Anyone Can Learn to Code an LSTM-RNN in Python
http://iamtrask./2015/11/15/anyone-can-code-lstm/

Convolutional Neural Networks (CNNs)

Introducing Convolutional Networks
http:///chap6.html

Deep Learning and Convolutional Neural Networks
https:///@ageitgey/machine-learning-is-fun-part-3-deep-learning-and-convolutional-neural-networks-f40359318721

Conv Nets: A Modular Perspective
http://colah./posts/2014-07-Conv-Nets-Modular/

Understanding Convolutions
http://colah./posts/2014-07-Understanding-Convolutions/

Recurrent Neural Networks (RNNs)

Recurrent Neural Networks Tutorial
http://www./2015/09/recurrent-neural-networks-tutorial-part-1-introduction-to-rnns/

Attention and Augmented Recurrent Neural Networks
http:///2016/augmented-rnns/

The Unreasonable Effectiveness of Recurrent Neural Networks
http://karpathy./2015/05/21/rnn-effectiveness/

A Deep Dive into Recurrent Neural Networks
http:///2015/01/11/a-deep-dive-into-recurrent-neural-networks/

Reinforcement Learning

A Beginner's Guide to Reinforcement Learning and Its Implementation
https://www./blog/2017/01/introduction-to-reinforcement-learning-implementation/

A Tutorial for Reinforcement Learning
https://web./~gosavia/tutorial.pdf

Learning Reinforcement Learning
http://www./2016/10/learning-reinforcement-learning/

Deep Reinforcement Learning: Pong from Pixels
http://karpathy./2016/05/31/rl/

Generative Adversarial Networks (GANs)

An Introduction to Adversarial Machine Learning
https://aaai18adversarial./slides/AML.pptx

What Is a Generative Adversarial Network?
https://blogs./blog/2017/05/17/generative-adversarial-network/

Abusing Generative Adversarial Networks to Make 8-Bit Pixel Art
https:///@ageitgey/abusing-generative-adversarial-networks-to-make-8-bit-pixel-art-e45d9b96cee7

An Introduction to Generative Adversarial Networks (with code in TensorFlow)
http://blog./introduction-generative-adversarial-networks-code-tensorflow/

Generative Adversarial Networks for Beginners
https://www./learning/generative-adversarial-networks-for-beginners

Multi-Task Learning

An Overview of Multi-Task Learning in Deep Neural Networks
http:///multi-task/index.html
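Alongside the RNN and LSTM tutorials above, here is a minimal sketch (a toy example of my own; the sizes and weights are made up) of a single forward step of a vanilla RNN cell, h_t = tanh(W_xh x_t + W_hh h_{t-1} + b).

```python
import numpy as np

rng = np.random.default_rng(1)
input_size, hidden_size = 4, 3
W_xh = rng.normal(scale=0.1, size=(hidden_size, input_size))   # input-to-hidden weights
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))  # hidden-to-hidden weights
b = np.zeros(hidden_size)

def rnn_step(x_t, h_prev):
    # The same weights are reused at every time step; this sharing is what
    # lets a recurrent network process sequences of arbitrary length.
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b)

h = np.zeros(hidden_size)
for x_t in rng.normal(size=(5, input_size)):   # a toy sequence of 5 inputs
    h = rnn_step(x_t, h)
print(h)   # final hidden state summarizes the whole sequence
```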
NLP

Natural Language Processing is Fun!
https:///@ageitgey/natural-language-processing-is-fun-9a0bff37854e

A Primer on Neural Network Models for Natural Language Processing
http://u.cs./~yogo/nnlp.pdf

The Definitive Guide to Natural Language Processing
https:///blog/the-definitive-guide-to-natural-language-processing/

An Introduction to Natural Language Processing
https://blog./introduction-natural-language-processing-nlp/

Natural Language Processing Tutorial
http://www./blog/natural-language-processing-tutorial/

Natural Language Processing (Almost) from Scratch
https:///pdf/1103.0398.pdf

Deep Learning and NLP

Deep Learning Applied to NLP
https:///pdf/1703.03091.pdf

Deep Learning for NLP (without Magic)
https://nlp./courses/NAACL2013/NAACL2013-Socher-Manning-DeepLearning.pdf

Understanding Convolutional Neural Networks for NLP
http://www./2015/11/understanding-convolutional-neural-networks-for-nlp/

Deep Learning, NLP, and Representations
http://colah./posts/2014-07-NLP-RNNs-Representations/

Embed, Encode, Attend, Predict: The New Deep Learning Formula for State-of-the-Art NLP Models
https:///blog/deep-learning-formula-nlp

Understanding Natural Language with Deep Neural Networks Using Torch
https://devblogs./parallelforall/understanding-natural-language-deep-neural-networks-using-torch/

Deep Learning for NLP with PyTorch
http://pytorch.org/tutorials/beginner/deep_learning_nlp_tutorial.html

Word Vectors

Movie Review Classification with Bag-of-Words Models
https://www./c/word2vec-nlp-tutorial

On Word Embeddings: Part I, Part II, Part III
http:///word-embeddings-1/index.html
http:///word-embeddings-softmax/index.html
http:///secret-word2vec/index.html

The Amazing Power of Word Vectors
https://blog./2016/04/21/the-amazing-power-of-word-vectors/

word2vec Parameter Learning Explained
https:///pdf/1411.2738.pdf

Word2Vec Tutorial: The Skip-Gram Model; Negative Sampling
http:///2016/04/19/word2vec-tutorial-the-skip-gram-model/
http:///2017/01/11/word2vec-tutorial-part-2-negative-sampling/

Encoder-Decoder

Attention and Memory in Deep Learning and NLP
http://www./2016/01/attention-and-memory-in-deep-learning-and-nlp/

Sequence Models

Sequence Learning with Neural Networks
https://papers./paper/5346-sequence-to-sequence-learning-with-neural-networks.pdf

Machine Learning is Fun! Part 5: Language Translation with Deep Learning and the Magic of Sequences
https:///@ageitgey/machine-learning-is-fun-part-5-language-translation-with-deep-learning-and-the-magic-of-sequences-2ace0acca0aa

How to Use an Encoder-Decoder LSTM to Echo Sequences of Random Integers
http:///how-to-use-an-encoder-decoder-lstm-to-echo-sequences-of-random-integers/

tf-seq2seq
https://google./seq2seq/

Python

Machine Learning Crash Course
https://developers.google.com/machine-learning/crash-course/

Awesome Machine Learning
https://github.com/josephmisiti/awesome-machine-learning

7 Steps to Mastering Machine Learning with Python
http://www./2015/11/seven-steps-machine-learning-python.html

An Example Machine Learning Notebook
http://nbviewer./github/rhiever/Data-Analysis-and-Machine-Learning-Projects/blob/master/example-data-science-notebook/Example%20Machine%20Learning%20Notebook.ipynb

Machine Learning with Python
https://www./machine_learning_with_python/machine_learning_with_python_quick_guide.htm

Hands-On Examples

How to Implement the Perceptron Algorithm from Scratch in Python
http:///implement-perceptron-algorithm-scratch-python/

Implementing a Neural Network from Scratch in Python
http://www./2015/09/implementing-a-neural-network-from-scratch/

A Neural Network in 11 Lines of Python
http://iamtrask./2015/07/12/basic-python-network/

Implementing Your Own k-Nearest Neighbor Algorithm Using Python
http://www./2016/01/implementing-your-own-knn-using-python.html

ML from Scratch
https://github.com/eriklindernoren/ML-From-Scratch

Python Machine Learning (2nd Edition) Code Repository
https://github.com/rasbt/python-machine-learning-book-2nd-edition

SciPy and NumPy

SciPy Lecture Notes
http://www./

Python NumPy Tutorial
http://cs231n./python-numpy-tutorial/

An Introduction to NumPy and SciPy
https://engineering./~shell/che210d/numpy.pdf

A Crash Course in Python for Scientists
http://nbviewer./gist/rpmuller/5920182
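Bridging the word vector tutorials earlier in this section and the NumPy material just above, here is a minimal sketch (the three vectors are made-up toy values, not real word2vec output) of cosine similarity, the standard way embeddings are compared.

```python
import numpy as np

def cosine_similarity(u, v):
    # 1.0 = same direction, 0.0 = orthogonal, -1.0 = opposite direction.
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Toy 3-dimensional "embeddings"; real word vectors have hundreds of dimensions.
king  = np.array([0.8, 0.3, 0.1])
queen = np.array([0.7, 0.4, 0.2])
apple = np.array([0.1, 0.9, 0.8])

print(cosine_similarity(king, queen))   # high: related words point the same way
print(cosine_similarity(king, apple))   # lower: unrelated words diverge
```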
scikit-learn

PyCon scikit-learn Tutorial Index
http://nbviewer./github/jakevdp/sklearn_pycon2015/blob/master/notebooks/Index.ipynb

scikit-learn Classification Algorithms
https://github.com/mmmayo13/scikit-learn-classifiers/blob/master/sklearn-classifiers-tutorial.ipynb

scikit-learn Tutorials
http:///stable/tutorial/index.html

Abridged scikit-learn Tutorials
https://github.com/mmmayo13/scikit-learn-beginners-tutorials

TensorFlow

TensorFlow Tutorials
https://www./tutorials/

An Introduction to TensorFlow: CPU vs. GPU
https:///@erikhallstrm/hello-world-tensorflow-649b15aed18c

TensorFlow: A Primer
https://blog./tensorflow-a-primer-4b3fa0978be3

RNNs in TensorFlow
http://www./2016/08/rnns-in-tensorflow-a-practical-guide-and-undocumented-features/

Implementing a CNN for Text Classification in TensorFlow
http://www./2015/12/implementing-a-cnn-for-text-classification-in-tensorflow/

How to Run Text Summarization with TensorFlow
http://pavel./2016/10/15/how-to-run-text-summarization-with-tensorflow/

PyTorch

PyTorch Tutorials
http://pytorch.org/tutorials/

A Gentle Intro to PyTorch
http://blog./2017/04/24/a-gentle-intro-to-pytorch/

Tutorial: Deep Learning in PyTorch
https://iamtrask./2017/01/15/pytorch-tutorial/

PyTorch Examples
https://github.com/jcjohnson/pytorch-examples

PyTorch Tutorial
https://github.com/MorvanZhou/PyTorch-Tutorial

PyTorch Tutorial for Deep Learning Researchers
https://github.com/yunjey/pytorch-tutorial

Math

Math for Machine Learning
https://people./~praman1/static/pub/math-for-ml.pdf

Math for Machine Learning
http://www.umiacs./~hal/courses/2013S_ML/math4ml.pdf

Linear Algebra

An Intuitive Guide to Linear Algebra
https:///articles/linear-algebra-guide/

A Programmer's Intuition for Matrix Multiplication
https:///articles/matrix-multiplication/

Understanding the Cross Product
https:///articles/cross-product/

Understanding the Dot Product
https:///articles/vector-calculus-understanding-the-dot-product/

Linear Algebra for Machine Learning (University at Buffalo CSE574)
http://www.cedar./~srihari/CSE574/Chap1/LinearAlgebra.pdf

Linear Algebra Cheat Sheet for Deep Learning
https:///towards-data-science/linear-algebra-cheat-sheet-for-deep-learning-cd67aba4526c

Linear Algebra Review and Reference
http://cs229./section/cs229-linalg.pdf

Probability

Understanding Bayes' Theorem with Ratios
https:///articles/understanding-bayes-theorem-with-ratios/

An Introduction to Probability Theory
http://cs229./section/cs229-prob.pdf

A Probability Theory Tutorial for Machine Learning
https://see./materials/aimlcs229/cs229-prob.pdf

Probability Theory (University at Buffalo CSE574)
http://www.cedar./~srihari/CSE574/Chap1/Probability-Theory.pdf

Probability Theory for Machine Learning (University of Toronto CSC411)
http://www.cs./~urtasun/courses/CSC411_Fall16/tutorial1.pdf

Calculus

How to Understand Derivatives: The Quotient Rule, Exponents, and Logarithms
https:///articles/how-to-understand-derivatives-the-quotient-rule-exponents-and-logarithms/

How to Understand Derivatives: The Product, Power, and Chain Rules
https:///articles/derivatives-product-power-chain/

Vector Calculus: Understanding the Gradient
https:///articles/vector-calculus-understanding-the-gradient/

Differential Calculus (Stanford CS224n)
http://web./class/cs224n/lecture_notes/cs224n-2017-review-differential-calculus.pdf

Calculus Overview
http://ml-cheatsheet./en/latest/calculus.html
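To close out the calculus references above, here is a minimal sketch (a worked example of my own; the function is arbitrary) of checking an analytic gradient with central finite differences, for f(x, y) = x^2 + xy, whose gradient is (2x + y, x).

```python
import numpy as np

def f(v):
    x, y = v
    return x**2 + x * y

def numerical_gradient(f, v, h=1e-5):
    # Central difference: (f(v + h*e_i) - f(v - h*e_i)) / (2h) per coordinate.
    grad = np.zeros_like(v)
    for i in range(v.size):
        step = np.zeros_like(v)
        step[i] = h
        grad[i] = (f(v + step) - f(v - step)) / (2.0 * h)
    return grad

v = np.array([1.0, 2.0])
print(numerical_gradient(f, v))   # approx. [4.0, 1.0], matching (2x + y, x)
```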
This article was translated and compiled by the Alibaba Cloud Yunqi community.
Original title: "over-200-of-the-best-machine-learning-nlp-and-python-tutorials-2018-edition"
Author: Robbie Allen