Regularization methods introduce bias into the regression solution that can reduce variance considerably relative to the ordinary least squares (OLS) solution.
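For illustration, here is a small sketch (my own toy data, not from the text) of that bias–variance trade-off: a closed-form ridge solution on a near-collinear design matrix. Ridge adds bias by shrinking the coefficients, but the shrunken solution is more stable than OLS. The data and the penalty strength `lam` are assumptions for the demo.

```python
import numpy as np

# Toy ill-conditioned regression problem (illustrative values).
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
X[:, 2] = X[:, 0] + 0.01 * rng.normal(size=50)  # nearly collinear column
y = X @ np.array([1.0, 2.0, 0.0]) + 0.1 * rng.normal(size=50)

lam = 1.0  # ridge penalty strength (assumed value)
ols = np.linalg.solve(X.T @ X, X.T @ y)                    # OLS: (X'X)^-1 X'y
ridge = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)  # ridge: (X'X + lam*I)^-1 X'y

# Ridge shrinks the solution: its norm is never larger than the OLS norm.
print(np.linalg.norm(ridge) < np.linalg.norm(ols))  # True
```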
Too much data augmentation, combined with other forms of regularization (weight L2, dropout, etc.), can cause the net to underfit.
Apply regularized LDA to the data before you feed it to your model.
To avoid overfitting, regularization techniques (L1 and L2) are used to penalize large values of w1, w2, and so on.
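The penalty terms mentioned above can be sketched directly (the weight vector `w` and the strength `alpha` are made-up illustration values): the L1 penalty sums absolute weights, the L2 penalty sums squared weights, and both grow when any wi is large.

```python
import numpy as np

# Hypothetical weight vector and regularization strength for illustration.
w = np.array([0.5, -3.0, 0.1])
alpha = 0.1

l1_penalty = alpha * np.sum(np.abs(w))  # Lasso-style: alpha * sum(|wi|)
l2_penalty = alpha * np.sum(w ** 2)     # Ridge-style: alpha * sum(wi^2)

print(round(l1_penalty, 3))  # 0.36
print(round(l2_penalty, 3))  # 0.926
```

The large weight (-3.0) dominates both penalties, which is exactly what pushes the optimizer toward smaller weights.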
Regularized linear models are a powerful set of tools for feature interpretation and selection.
One method to address the regularization problem is to use multiple weight vectors in each neighborhood.
In the regularized sparse solution, you ensure that every component of the vector x is highly effective.
But regularization is a common machine learning technique, so I thought I would include it.
However, SciPy's incomplete gamma function gammainc corresponds to the regularized gamma function.
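A quick check of this: `scipy.special.gammainc(a, x)` returns the regularized lower incomplete gamma function P(a, x), i.e. the incomplete integral divided by Gamma(a), so its values lie in [0, 1]. For a = 1 it reduces to 1 − exp(−x), which gives a simple sanity test.

```python
from math import exp, isclose
from scipy.special import gammainc

# gammainc is the *regularized* lower incomplete gamma P(a, x).
# For a = 1: P(1, x) = 1 - exp(-x).
print(isclose(gammainc(1.0, 1.0), 1.0 - exp(-1.0)))  # True

# Because it is regularized (divided by Gamma(a)), it saturates at 1.
print(abs(gammainc(2.5, 1e9) - 1.0) < 1e-9)  # True
```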
The result is Bosque, which represents a programming paradigm that Marron, in a paper he wrote, calls "regularized programming."
Regression regularization methods (Lasso, Ridge, and ElasticNet) work well in cases of high dimensionality and multicollinearity among the variables in the data set.
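As a sketch (my own toy data and untuned `alpha` values, not from the text), all three of those scikit-learn estimators can be fit on a dataset with a deliberately multicollinear feature pair:

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge, ElasticNet

# Toy data with two nearly collinear columns (illustrative, not from the text).
rng = np.random.default_rng(42)
X = rng.normal(size=(100, 5))
X[:, 1] = X[:, 0] + 0.05 * rng.normal(size=100)  # collinear with column 0
y = 3.0 * X[:, 0] + 0.2 * rng.normal(size=100)

# alpha values are placeholders; in practice they would be cross-validated.
models = {type(m).__name__: m.fit(X, y)
          for m in (Lasso(alpha=0.1), Ridge(alpha=1.0), ElasticNet(alpha=0.1))}
for name, m in models.items():
    print(name, np.round(m.coef_, 2))
```

Lasso tends to zero out one of the collinear pair, Ridge splits the weight between them, and ElasticNet behaves in between, which is why the choice among the three matters under multicollinearity.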
If we decrease regularization, the model will fit the training data better; as a consequence, the variance will increase and the bias will decrease.
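This can be seen numerically with a small ridge example (my own made-up data): the training error of the ridge solution is non-decreasing in the penalty strength, so weakening the penalty always fits the training set at least as well.

```python
import numpy as np

# Toy regression problem (illustrative values only).
rng = np.random.default_rng(1)
X = rng.normal(size=(40, 8))
y = X @ rng.normal(size=8) + 0.5 * rng.normal(size=40)

def train_mse(lam):
    """Training MSE of the closed-form ridge solution with penalty lam."""
    w = np.linalg.solve(X.T @ X + lam * np.eye(8), X.T @ y)
    return float(np.mean((X @ w - y) ** 2))

# Less regularization -> better training fit (higher variance, lower bias).
print(train_mse(0.01) < train_mse(10.0))  # True
```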
Lu & Zheng (2017) [42] proposed a regularized skip-gram model for learning such cross-domain embeddings.
SGD requires a number of hyperparameters, such as the regularization parameter and the number of iterations.
Regularization can be accomplished by restricting the hypothesis space H.
Broadly, it covers supervised and unsupervised learning, linear and logistic regression, regularization, and Naïve Bayes.
However, contrary to the Perceptron, they include a regularization parameter C.
It provides a comprehensive introduction to CNNs, starting with the essential concepts behind neural networks: training, regularization, and optimization.