Hyperparameter Tuning and Optimization Algorithms for Neural Networks
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">构建完一个神经网络模型后,<span style="color: black;">怎样</span>有效运作神经网络?其<span style="color: black;">重点</span></span><span style="color: black;">内容<span style="color: black;">触及</span>超参数调优,数据集构建,以及<span style="color: black;">怎样</span><span style="color: black;">保证</span>优化算法快速运行,从而使学习算法在<span style="color: black;">恰当</span>时间内完成自主学习。</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><strong style="color: blue;"><span style="color: black;">1 数据集</span></strong></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">在配置训练、验证和测试数据集的过程中<span style="color: black;">怎样</span>正确的分配数据集,则在很大程度上能够<span style="color: black;">帮忙</span><span style="color: black;">咱们</span>创建<span style="color: black;">有效</span>的神经网络。训练神经网络时,需要做出很多决策,如:</span><span style="color: black;">神经网络分多少层;每层含有多少个<span style="color: black;">隐匿</span>单元;学习速率是多少;各层采用<span style="color: black;">那些</span>激活函数。</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;"><img src="https://mmbiz.qpic.cn/mmbiz_png/qrcbvH9rGkzaXYte2aXMEWmHS6HLgMjQ7ic2mUeHRMspV5pbtelqyGeKk2UILnqku3ot36bSz3HKELrsnyhhM0Q/640?tp=webp&wxfrom=5&wx_lazy=1&wx_co=1" style="width: 50%; margin-bottom: 20px;"></span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">在创建新应用的过程中,不可能从一<span style="color: black;">起始</span>就准确预测出这些信息和其他超级参数。而应用深度学习是一个典型的迭代过程,需要多次循环往复,<span style="color: black;">才可</span>为应用程序找到一个<span style="color: black;">叫作</span>心的神经网络,<span style="color: black;">因此呢</span>循环该过程的效率是决定项目<span style="color: black;">发展</span>速度的一个关键<span style="color: black;">原因</span>,而创建高质量的训练数据集,验证集和测试集<span style="color: black;">亦</span>有助于<span style="color: black;">加强</span>循环效率。</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">在<span style="color: black;">设备</span>学习发展的小数据量时代,<span style="color: black;">平常</span>做法是将所有数据三七分,即人们常说的70%验证集,30%测试集,<span style="color: black;">倘若</span><span style="color: black;">无</span><span style="color: black;">知道</span>设置验证集,<span style="color: black;">亦</span><span style="color: black;">能够</span><span style="color: black;">根据</span>60%训练,20%验证和20%测试集来划分。</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">若数据<span style="color: black;">仅有</span>100条,1000条<span style="color: black;">或</span>1万条数据,<span style="color: black;">那样</span>上述比例划分是非常<span style="color: black;">恰当</span>的。</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;"><span style="color: black;">然则</span>在大数据时代,<span style="color: black;">此刻</span>的数据量可能是百万级别,<span style="color: black;">那样</span>验证集和测试集占数据总量的比例会趋向于变得更小。<span style="color: black;">由于</span>验证集的目的<span style="color: black;">便是</span>验证<span style="color: black;">区别</span>的算法,检验哪种算法更有效,</span><span style="color: black;"><span style="color: black;">因此呢</span>,验证集要足够大<span style="color: black;">才可</span><span style="color: black;">评定</span>,<span style="color: black;">例如</span>2个<span style="color: black;">乃至</span>10个<span style="color: black;">区别</span>算法,并<span style="color: black;">快速</span>判断出哪种算法更有效。<span style="color: black;">咱们</span>可能不需</span><span style="color: black;">要拿出20%的数据<span style="color: black;">做为</span>验证集。<span style="color: black;">因此呢</span>,<span style="color: black;">针对</span>数据集规模<span style="color: black;">很强</span>的,验证集和测试集要<span style="color: black;">少于</span>数据总量的20%或10%。</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">在<span style="color: black;">设备</span>学习中,<span style="color: black;">倘若</span><span style="color: black;">仅有</span>一个训练集和一个验证集,而</span><span style="color: black;"><span style="color: black;">无</span>独立的测试集,遇到这种<span style="color: black;">状况</span>,训练集还被人们<span style="color: black;">叫作</span>为训练集,而验证集则被<span style="color: black;">叫作</span>为测试集,<span style="color: black;">不外</span>在<span style="color: black;">实质</span>应用中,人们只是把测试集当成简单交叉验证集<span style="color: black;">运用</span>,并<span style="color: black;">无</span>完全实现该术语的功能,<span style="color: black;">由于</span><span style="color: black;">她们</span>把验证集数据过度拟合到了测试集中。</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">而搭建训练验证集和测试集能够加速神经网络的集成,<span style="color: black;">亦</span><span style="color: black;">能够</span>更有效地衡量算法地偏差和方差,从而<span style="color: black;">帮忙</span><span style="color: black;">咱们</span>更<span style="color: black;">有效</span>地<span style="color: black;">选取</span>合适<span style="color: black;">办法</span>来优化算法。</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><img src="https://mmbiz.qpic.cn/mmbiz_png/qrcbvH9rGkzaXYte2aXMEWmHS6HLgMjQdjLaD5ibGTDQSKUib4cfvKjWiam8yvpFia5IpHQqI9YqIvITVxsxSmhN0A/640?tp=webp&wxfrom=5&wx_lazy=1&wx_co=1" style="width: 50%; margin-bottom: 20px;"></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><strong style="color: blue;"><span style="color: black;">2 偏差与方差</span></strong></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;"><img src="https://mmbiz.qpic.cn/mmbiz_png/qrcbvH9rGkzaXYte2aXMEWmHS6HLgMjQkwrPAwbgVHh3GA6ESJqAqAxRvIVr3C4Fr6Gr7XpdITYI7KtXsbWpxg/640?tp=webp&wxfrom=5&wx_lazy=1&wx_co=1" style="width: 50%; margin-bottom: 20px;"></span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;"><img src="https://mmbiz.qpic.cn/mmbiz_png/qrcbvH9rGkzaXYte2aXMEWmHS6HLgMjQzYOsgNViaKIFndicV1DxVcnAyf6NJsq7HmdZERJ9tFGJv4yxGtq4ptJA/640?tp=webp&wxfrom=5&wx_lazy=1&wx_co=1" style="width: 50%; margin-bottom: 20px;"></span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><strong style="color: blue;"><span style="color: black;">3、正则化</span></strong></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">深度学习可能存在过拟合问题——高方差,有两个<span style="color: black;">处理</span><span style="color: black;">办法</span>,一个是正则化,另一个是准备<span style="color: black;">更加多</span>的数据,这是非常<span style="color: black;">靠谱</span>的<span style="color: black;">办法</span>,但可能<span style="color: black;">没法</span>时时刻刻准备足够多的训练数据<span style="color: black;">或</span>获取<span style="color: black;">更加多</span>数据的成本很高,但正则化<span style="color: black;">一般</span>有助于避免过拟合或减少网络误差。</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><img src="https://mmbiz.qpic.cn/mmbiz_png/qrcbvH9rGkzaXYte2aXMEWmHS6HLgMjQaYkED2FRnlasPhtyB5esG8M2Ttqa3O3l1OFxdibQd6wWaWzhS92zXKQ/640?tp=webp&wxfrom=5&wx_lazy=1&wx_co=1" style="width: 50%; margin-bottom: 20px;"></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><img src="https://mmbiz.qpic.cn/mmbiz_png/qrcbvH9rGkzaXYte2aXMEWmHS6HLgMjQyqJfRBgt6v4j9TNBPvibH3HSq7OBxPNYQgrdIdHiciakoKYoibeW3Pibaxg/640?tp=webp&wxfrom=5&wx_lazy=1&wx_co=1" style="width: 50%; margin-bottom: 20px;"></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><img src="https://mmbiz.qpic.cn/mmbiz_png/qrcbvH9rGkzaXYte2aXMEWmHS6HLgMjQlaNyZnAhvopFusn7wnCa22WG6SExDbKWl0Ifuib46hSylJBjgS4MtGw/640?tp=webp&wxfrom=5&wx_lazy=1&wx_co=1" style="width: 50%; margin-bottom: 20px;"></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">dropout: 随机的取掉下一神经元。</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">Batch Normalization:对每一种数据的min-batch set 标准化。</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;"><img src="https://mmbiz.qpic.cn/mmbiz_png/qrcbvH9rGkzWWjKjcRm6tibVUbJjTLrJHibk31YwBv9Eib3gZpgXvrMs0eqdtpiaNsolyhdLLiancoDuOkTZbGfibFow/640?tp=webp&wxfrom=5&wx_lazy=1&wx_co=1" style="width: 50%; margin-bottom: 20px;"></span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">Batch Normalization 是</span><span style="color: black;">对每一种数据的</span><span style="color: black;">min-batch set </span><span style="color: black;">标准化。</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;"><span style="color: black;"><img src="https://mmbiz.qpic.cn/mmbiz_png/qrcbvH9rGkzaXYte2aXMEWmHS6HLgMjQjABoRgicibeemCkQJkHp4pS0GO1rUyLJULygIGafmp7J2VNP6WHcu9pA/640?tp=webp&wxfrom=5&wx_lazy=1&wx_co=1" style="width: 50%; margin-bottom: 20px;"></span></span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;"><strong style="color: blue;">4 归一化(输入)</strong></span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><img src="https://mmbiz.qpic.cn/mmbiz_png/qrcbvH9rGkzaXYte2aXMEWmHS6HLgMjQgnN0jxdJib6JJjwvgiajIEJZ3ZuaMwRnNGQBic5T5ymd9aIU7siaMwHeSA/640?tp=webp&wxfrom=5&wx_lazy=1&wx_co=1" style="width: 50%; margin-bottom: 20px;"></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><img src="https://mmbiz.qpic.cn/mmbiz_png/qrcbvH9rGkzaXYte2aXMEWmHS6HLgMjQeyfNj1TibiajLaopjkBHWYOATrWBiaRXb8CXSvQDJP991ZjWibe4T0JdqQ/640?tp=webp&wxfrom=5&wx_lazy=1&wx_co=1" style="width: 50%; margin-bottom: 20px;"></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><img src="https://mmbiz.qpic.cn/mmbiz_png/qrcbvH9rGkzaXYte2aXMEWmHS6HLgMjQMLyx3IaJskXy0UQbKibW76VPbsDcBGHH6fpc9HgbI8VXJibzykib7ZzOg/640?tp=webp&wxfrom=5&wx_lazy=1&wx_co=1" style="width: 50%; margin-bottom: 20px;"></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;"><strong style="color: blue;">5 梯度消失/梯度爆炸</strong></span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;"><img src="https://mmbiz.qpic.cn/mmbiz_png/qrcbvH9rGkzaXYte2aXMEWmHS6HLgMjQcB3CUrcXzFdgr2mGeGwG6dl27MY0NhNjjFkOSviahgl1OR9Q8W6UAtQ/640?tp=webp&wxfrom=5&wx_lazy=1&wx_co=1" style="width: 50%; margin-bottom: 20px;"></span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">深度学习网络<span style="color: black;">重点</span>的缺点是深度网络难以训练,<span style="color: black;">详细</span><span style="color: black;">原由</span>是易梯度消失和梯度爆炸。</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;"><img src="https://mmbiz.qpic.cn/mmbiz_png/qrcbvH9rGkzWWjKjcRm6tibVUbJjTLrJHD8zvibIV4wVt5VElSRfZt83BFTuG4bx1luyOR5ysXE1LgYdAHxUnGibQ/640?tp=webp&wxfrom=5&wx_lazy=1&wx_co=1" style="width: 50%; margin-bottom: 20px;"></span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">若以三层<span style="color: black;">隐匿</span>神经网络为例,则下图<span style="color: black;">暗示</span>为其易<span style="color: black;">出现</span>梯度消失的推导。</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;"><img src="https://mmbiz.qpic.cn/mmbiz_png/qrcbvH9rGkzWWjKjcRm6tibVUbJjTLrJHKG8ILvLPOw4RRjAAGuwv3UUKSWm6O7ibJzvu4LU9Sa6Ak58iccwWMbHw/640?tp=webp&wxfrom=5&wx_lazy=1&wx_co=1" style="width: 50%; margin-bottom: 20px;"></span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><strong style="color: blue;"><span style="color: black;">6 优化算法</span></strong></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;"><span style="color: black;">设备</span>学习的应用是一个高度依赖经验的过程,<span style="color: black;">伴同</span>着<span style="color: black;">海量</span>迭代的过程,需要训练<span style="color: black;">许多</span>模型,<span style="color: black;">才可</span>找到合适的那一个,故此,优化算法的目的是快速训练模型,加快学习算法。</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><strong style="color: blue;"><span style="color: black;">6.1 Mini-batch 梯度下降法</span></strong></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">把训练集分割为小一点的子集训练,这些子集被取名为mini-batch。假设每一个子集中<span style="color: black;">仅有</span>1000个样本,<span style="color: black;">那样</span>把其中的 x1</span><span style="color: black;">到x1000</span><span style="color: black;">取出来,将其<span style="color: black;">叫作</span>为<span style="color: black;">第1</span>个子训练集,<span style="color: black;">亦</span>叫做mini-batch,<span style="color: black;">而后</span>你再取出接下来的1000个样本,从x1001</span><span style="color: black;">到x2000</span><span style="color: black;">,<span style="color: black;">而后</span>再取1000个样本,以此类推</span><span style="color: black;">。</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><img src="https://mmbiz.qpic.cn/mmbiz_png/qrcbvH9rGkzWWjKjcRm6tibVUbJjTLrJHrrWe1IzUZ7mVvHYcpZcibfmtGJBJtvOut2aBm1yVpacuEDH0HdyetmQ/640?tp=webp&wxfrom=5&wx_lazy=1&wx_co=1" style="width: 50%; margin-bottom: 20px;"></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><img src="https://mmbiz.qpic.cn/mmbiz_png/qrcbvH9rGkzWWjKjcRm6tibVUbJjTLrJHiaqkCpbXBkEmP7A9iagZXJtl83YtKVphSKT71SVkcbhhekNJOBd622Ow/640?tp=webp&wxfrom=5&wx_lazy=1&wx_co=1" style="width: 50%; margin-bottom: 20px;"></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><img src="https://mmbiz.qpic.cn/mmbiz_png/qrcbvH9rGkzWWjKjcRm6tibVUbJjTLrJH5sFuNyIVpteCcLWw72end1ibjIhmyibaOqMVH5I0291tHANiafFkQXn9w/640?tp=webp&wxfrom=5&wx_lazy=1&wx_co=1" style="width: 50%; margin-bottom: 20px;"></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><strong style="color: blue;"><span style="color: black;">6.2 指数加权平均数</span></strong></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><img src="https://mmbiz.qpic.cn/mmbiz_png/qrcbvH9rGkzWWjKjcRm6tibVUbJjTLrJHyYmj35H5UY7FeGicmo2c5WV7uMYNw00vVleOfc4v3tqEHbja3XHKwIw/640?tp=webp&wxfrom=5&wx_lazy=1&wx_co=1" style="width: 50%; margin-bottom: 20px;"></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><strong style="color: blue;"><span style="color: black;">6.3 动量梯度下降法</span></strong></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">动量(Momentum): 一阶导数,梯度指数加权平均数。运行速度几乎总是快于标准的梯度下降算法,它<span style="color: black;">能够</span>加快梯度下降。</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;"><img src="https://mmbiz.qpic.cn/mmbiz_png/qrcbvH9rGkzWWjKjcRm6tibVUbJjTLrJHb5Gy9TVejwH8qQXX6icfic5Piahia7Sowr3IPtMKIoWTICiaZgusyPgIwhw/640?tp=webp&wxfrom=5&wx_lazy=1&wx_co=1" style="width: 50%; margin-bottom: 20px;"></span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;"><img src="https://mmbiz.qpic.cn/mmbiz_png/qrcbvH9rGkzWWjKjcRm6tibVUbJjTLrJHWqwV8Rn1AZZiaC5hHEqItEhbLHfOeu5KRbHlicpILMVjAiaibEzq0GPFtQ/640?tp=webp&wxfrom=5&wx_lazy=1&wx_co=1" style="width: 50%; margin-bottom: 20px;"></span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;"><img src="https://mmbiz.qpic.cn/mmbiz_png/qrcbvH9rGkzWWjKjcRm6tibVUbJjTLrJHCtfwLJzpca9u0StefZNibDmPsswqnEbYwtzQKV4shiaBCtIcYnVCibLsA/640?tp=webp&wxfrom=5&wx_lazy=1&wx_co=1" style="width: 50%; margin-bottom: 20px;"></span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><strong style="color: blue;"><span style="color: black;">6.4 RMSprop</span></strong></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">RMSprop全<span style="color: black;">叫作</span>是root mean square prop算法,它是一阶导数,微分平均加权值的平方根,<span style="color: black;">亦</span><span style="color: black;">能够</span>加速梯度下降。</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><img src="https://mmbiz.qpic.cn/mmbiz_png/qrcbvH9rGkzWWjKjcRm6tibVUbJjTLrJHnnB4ZJXPClQCIBjiaOiaIDcsMGQvuE96b38ibXzia58oWQPwseqmkpFV2A/640?tp=webp&wxfrom=5&wx_lazy=1&wx_co=1" style="width: 50%; margin-bottom: 20px;"></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><strong style="color: blue;"><span style="color: black;">6.5 Adam 优化算法</span></strong></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><strong style="color: blue;"><span style="color: black;"><img src="https://mmbiz.qpic.cn/mmbiz_png/qrcbvH9rGkzWWjKjcRm6tibVUbJjTLrJH85R5M228IwhReiasUM4NsBgs4aZAHVicgu10tPnD2mcg6xNaVrJq9tnw/640?tp=webp&wxfrom=5&wx_lazy=1&wx_co=1" style="width: 50%; margin-bottom: 20px;"></span></strong></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><strong style="color: blue;"><span style="color: black;"><img src="https://mmbiz.qpic.cn/mmbiz_png/qrcbvH9rGkzWWjKjcRm6tibVUbJjTLrJHKNGv0HKWOlwJJVdvYJIbibcxBgY6XRozlXwCkZouraRSg04K5DGLGlg/640?tp=webp&wxfrom=5&wx_lazy=1&wx_co=1" style="width: 50%; margin-bottom: 20px;"></span></strong></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><strong style="color: blue;"><span style="color: black;"><img src="https://mmbiz.qpic.cn/mmbiz_png/qrcbvH9rGkzWWjKjcRm6tibVUbJjTLrJH6R9CVJDQrd9ayBsJ8NRO8gkAwzH7dxicbQfjlqs2tjmRD3KHtRGfrKA/640?tp=webp&wxfrom=5&wx_lazy=1&wx_co=1" style="width: 50%; margin-bottom: 20px;"></span></strong></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><strong style="color: blue;"><span style="color: black;"><img src="https://mmbiz.qpic.cn/mmbiz_png/qrcbvH9rGkzWWjKjcRm6tibVUbJjTLrJHc3SgdylGDbCMk2MGDcJTsECJ4lpChJ6dnVQWEpuPCRfUxmibGJibDWBA/640?tp=webp&wxfrom=5&wx_lazy=1&wx_co=1" style="width: 50%; margin-bottom: 20px;"></span></strong></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><strong style="color: blue;"><span style="color: black;"><img src="https://mmbiz.qpic.cn/mmbiz_png/qrcbvH9rGkzWWjKjcRm6tibVUbJjTLrJHIX2rlhO0HqDeNFmBXsRZaibHdyyzQhhicyWe4LN8A3b7cdliasN2e5DEg/640?tp=webp&wxfrom=5&wx_lazy=1&wx_co=1" style="width: 50%; margin-bottom: 20px;"></span></strong></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><strong style="color: blue;"><span style="color: black;">6.6 反向传播法</span></strong></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">反向传播法是神经网络中更新权重参数的<span style="color: black;">办法</span>。</span></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><img src="https://mmbiz.qpic.cn/mmbiz_png/qrcbvH9rGkzWWjKjcRm6tibVUbJjTLrJHyRaJhp32ystVqMHiar2FscibMIpD9HCqSDEy7eh8rtiaxy7GXstu0fy5g/640?tp=webp&wxfrom=5&wx_lazy=1&wx_co=1" style="width: 50%; margin-bottom: 20px;"></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><strong style="color: blue;"><span style="color: black;">7 超参数调节</span></strong></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><strong style="color: blue;"><span style="color: black;"><img src="https://mmbiz.qpic.cn/mmbiz_png/qrcbvH9rGkzWWjKjcRm6tibVUbJjTLrJHlc5hdaaAL04CokCGsS0BXUkw9ZwwjHanRQK9DYrseHTb5aoZib0O3RQ/640?tp=webp&wxfrom=5&wx_lazy=1&wx_co=1" style="width: 50%; margin-bottom: 20px;"></span></strong></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><strong style="color: blue;"><span style="color: black;"><img src="https://mmbiz.qpic.cn/mmbiz_png/qrcbvH9rGkzWWjKjcRm6tibVUbJjTLrJH9ric9u845Zg1XhNpKNvaBXkoV3YeNaoOSJVcicOjNfJUYib3LIjpyybPw/640?tp=webp&wxfrom=5&wx_lazy=1&wx_co=1" style="width: 50%; margin-bottom: 20px;"></span></strong></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><strong style="color: blue;"><span style="color: black;"><img src="https://mmbiz.qpic.cn/mmbiz_png/qrcbvH9rGkzWWjKjcRm6tibVUbJjTLrJHxEW2FdhWcZOCIzW6AFLiamPgFsiaGQ1v0DU0ucLMEv2richS816alCORA/640?tp=webp&wxfrom=5&wx_lazy=1&wx_co=1" style="width: 50%; margin-bottom: 20px;"></span></strong></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><strong style="color: blue;"><span style="color: black;">8 Batch正则化</span></strong></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><strong style="color: blue;"><span style="color: black;"><img src="data:image/svg+xml,%3C%3Fxml version=1.0 encoding=UTF-8%3F%3E%3Csvg width=1px height=1px viewBox=0 0 1 1 version=1.1 xmlns=http://www.w3.org/2000/svg xmlns:xlink=http://www.w3.org/1999/xlink%3E%3Ctitle%3E%3C/title%3E%3Cg stroke=none stroke-width=1 fill=none fill-rule=evenodd fill-opacity=0%3E%3Cg transform=translate(-249.000000, -126.000000) fill=%23FFFFFF%3E%3Crect x=249 y=126 width=1 height=1%3E%3C/rect%3E%3C/g%3E%3C/g%3E%3C/svg%3E" style="width: 50%; margin-bottom: 20px;"></span></strong></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><strong style="color: blue;"><span style="color: black;">8.1 归一化网络的激活函数</span></strong></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><strong style="color: blue;"><span style="color: black;"><img src="data:image/svg+xml,%3C%3Fxml version=1.0 encoding=UTF-8%3F%3E%3Csvg width=1px height=1px viewBox=0 0 1 1 version=1.1 xmlns=http://www.w3.org/2000/svg xmlns:xlink=http://www.w3.org/1999/xlink%3E%3Ctitle%3E%3C/title%3E%3Cg stroke=none stroke-width=1 fill=none fill-rule=evenodd fill-opacity=0%3E%3Cg transform=translate(-249.000000, -126.000000) fill=%23FFFFFF%3E%3Crect x=249 y=126 width=1 height=1%3E%3C/rect%3E%3C/g%3E%3C/g%3E%3C/svg%3E" style="width: 50%; margin-bottom: 20px;"></span></strong></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><strong style="color: blue;"><span style="color: black;">8.2 Batch Norm 拟合进神经网络</span></strong></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><strong style="color: blue;"><span style="color: black;"><img src="data:image/svg+xml,%3C%3Fxml version=1.0 encoding=UTF-8%3F%3E%3Csvg width=1px height=1px viewBox=0 0 1 1 version=1.1 xmlns=http://www.w3.org/2000/svg xmlns:xlink=http://www.w3.org/1999/xlink%3E%3Ctitle%3E%3C/title%3E%3Cg stroke=none stroke-width=1 fill=none fill-rule=evenodd fill-opacity=0%3E%3Cg transform=translate(-249.000000, -126.000000) fill=%23FFFFFF%3E%3Crect x=249 y=126 width=1 height=1%3E%3C/rect%3E%3C/g%3E%3C/g%3E%3C/svg%3E" style="width: 50%; margin-bottom: 20px;"></span></strong></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><strong style="color: blue;"><span style="color: black;"><img src="data:image/svg+xml,%3C%3Fxml version=1.0 encoding=UTF-8%3F%3E%3Csvg width=1px height=1px viewBox=0 0 1 1 version=1.1 xmlns=http://www.w3.org/2000/svg xmlns:xlink=http://www.w3.org/1999/xlink%3E%3Ctitle%3E%3C/title%3E%3Cg stroke=none stroke-width=1 fill=none fill-rule=evenodd fill-opacity=0%3E%3Cg transform=translate(-249.000000, -126.000000) fill=%23FFFFFF%3E%3Crect x=249 y=126 width=1 height=1%3E%3C/rect%3E%3C/g%3E%3C/g%3E%3C/svg%3E" style="width: 50%; margin-bottom: 20px;"></span></strong></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><strong style="color: blue;"><span style="color: black;">8.3 Softmax 回归</span></strong></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><strong style="color: blue;"><span style="color: black;"><img src="data:image/svg+xml,%3C%3Fxml version=1.0 encoding=UTF-8%3F%3E%3Csvg width=1px height=1px viewBox=0 0 1 1 version=1.1 xmlns=http://www.w3.org/2000/svg xmlns:xlink=http://www.w3.org/1999/xlink%3E%3Ctitle%3E%3C/title%3E%3Cg stroke=none stroke-width=1 fill=none fill-rule=evenodd fill-opacity=0%3E%3Cg transform=translate(-249.000000, -126.000000) fill=%23FFFFFF%3E%3Crect x=249 y=126 width=1 height=1%3E%3C/rect%3E%3C/g%3E%3C/g%3E%3C/svg%3E" style="width: 50%; margin-bottom: 20px;"></span></strong></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><strong style="color: blue;"><span style="color: black;">9 深度学习框架</span></strong></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><strong style="color: blue;"><span style="color: black;"><img src="data:image/svg+xml,%3C%3Fxml version=1.0 encoding=UTF-8%3F%3E%3Csvg width=1px height=1px viewBox=0 0 1 1 version=1.1 xmlns=http://www.w3.org/2000/svg xmlns:xlink=http://www.w3.org/1999/xlink%3E%3Ctitle%3E%3C/title%3E%3Cg stroke=none stroke-width=1 fill=none fill-rule=evenodd fill-opacity=0%3E%3Cg transform=translate(-249.000000, -126.000000) fill=%23FFFFFF%3E%3Crect x=249 y=126 width=1 height=1%3E%3C/rect%3E%3C/g%3E%3C/g%3E%3C/svg%3E" style="width: 50%; margin-bottom: 20px;"></span></strong></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><strong style="color: blue;"><span style="color: black;">10 参考资料</span></strong></p>
<p style="font-size: 16px; color: black; line-height: 40px; text-align: left; margin-bottom: 15px;"><span style="color: black;">.</span><span style="color: black;">https://github.com/fengdu78/deeplearning_ai_books</span></p>