What does THE LOSS FUNCTION mean in Chinese - Chinese translation

[ðə lɒs 'fʌŋkʃn]
loss function

Examples of using "the loss function" in English and their translations into Chinese

We try to minimize the loss function:
我们试图最小化损失函数:
In other words, the loss function doesn't correlate with image quality.
换句话说,损失函数与图像质量不相关。
Cross entropy serves as the loss function.
并使用交叉熵(cross-entropy)作为损失函数。
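The cross-entropy example above can be sketched in a few lines of plain Python (the function name and probability values here are illustrative, not from any particular library):

```python
import math

def cross_entropy(p_true, p_pred, eps=1e-12):
    """Cross-entropy between a one-hot target distribution and
    predicted class probabilities; eps guards against log(0)."""
    return -sum(t * math.log(max(q, eps)) for t, q in zip(p_true, p_pred))

# Target class is index 1; a confident correct prediction gives a low loss.
print(round(cross_entropy([0, 1, 0], [0.1, 0.8, 0.1]), 4))  # 0.2231
```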
This goes to the loss function part of backpropagation.
这涉及反向传播中的损失函数部分。
We could, for example, add a regularization term in the loss function.
例如,我们可以在损失函数中添加一个正则化项。
If it is equal to the loss function, SGD will converge to a global minimum.
如果它等于损失函数,SGD将收敛到全局最小值。
Our goal in training is to find the best set of weights and biases that minimizes the loss function.
在训练过程中,我们的目标是找到一组最佳的权重和偏置,使损失函数最小化。
So, to define the loss function, let's take the max between this and zero.
所以为了定义这个损失函数,我们取这个值与0的最大值。
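The max-with-zero construction described in the example above is the hinge loss; a minimal sketch in plain Python (names and numbers are illustrative):

```python
def hinge_loss(score, label):
    """Hinge loss: max(0, 1 - label * score), for labels in {-1, +1}.

    Taking the max with zero clips the loss at 0 once the prediction
    is confidently correct (margin >= 1)."""
    return max(0.0, 1.0 - label * score)

# A confident correct prediction incurs zero loss.
print(hinge_loss(2.0, 1))   # 0.0
# A wrong prediction is penalized linearly.
print(hinge_loss(-0.5, 1))  # 1.5
```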
Here it says that the L2Loss operation that we used in the loss function node is not available on iOS.
这里提示,我们在损失函数结点中使用的L2Loss操作在iOS上不可用。
You can optimize the loss function using optimization methods like L-BFGS or even SGD.
你可以使用L-BFGS甚至SGD等优化方法来优化损失函数。
We need to find a way to navigate to the bottom of the "valley", to point B, where the loss function has a minimum.
我们需要找到一种到达“谷底”B点(即损失函数最小值处)的方法。
According to [3], the loss function $L_{loc}$ in the regression task is defined as follows.
根据[3],回归任务中的损失函数L_loc定义如下。
We want our outputs to be in the same format as our inputs so we can compare our results using the loss function.
我们希望我们的输出与输入的格式相同,这样我们就可以使用损失函数来比较我们的结果。
Generally speaking, the loss function is designed to show how far we are from the 'ideal' solution.
一般来说,损失函数用来衡量我们离「理想」的解还有多远。
Starting from initial random weights, a multi-layer perceptron (MLP) minimizes the loss function by repeatedly updating these weights.
从初始随机权重开始,多层感知器(MLP)通过重复更新这些权重来最小化损失函数。
Generally speaking, the loss function is designed to show how far we are from the 'ideal' solution.
通常来讲,使用损失函数的目的就是展示我们离“理想”情况的差距。
Furthermore, extensive ablation evaluations are conducted to demonstrate the effectiveness of different terms of the loss function.
此外,还进行了大量的消融评估,以证明不同损失函数项的有效性。
The loss function compares the predicted outcome y_pred with the correct outcome y.
损失函数将预测结果y_pred与正确的结果y进行比较。
In NMF,L1 and L2 priors can be added to the loss function in order to regularize the model.
在NMF中,可以将L1和L2先验添加到损失函数中以正则化模型。
The loss function compares the predicted outcome y_pred with the correct outcome y.
Loss函数会将预测结果y_pred与正确输出结果y进行比较。
The cost function is the average of the loss function over the entire training set.
代价函数是整个训练集的损失函数的平均值。
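The cost/loss relationship in the example above can be sketched as follows (plain Python; squared error is an illustrative choice of per-example loss, not mandated by the text):

```python
def loss(y_pred, y):
    # Per-example loss; squared error is used purely as an illustration.
    return (y_pred - y) ** 2

def cost(predictions, targets):
    # Cost function: the average of the per-example losses
    # over the entire training set.
    return sum(loss(p, t) for p, t in zip(predictions, targets)) / len(targets)

print(cost([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]))  # (0 + 0 + 4) / 3
```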
The loss function and accuracy calculation here are not substantially different from those used in image classification.
这里的损失函数和准确率计算与图像分类中的并没有本质上的不同。
When MAE (mean absolute error) is the loss function, the median would be used as F0(x) to initialize the model.
当平均绝对误差(MAE)是损失函数时,中值将被用作F0(x)来初始化模型。
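The MAE/median connection mentioned above can be checked with a small sketch (plain Python; the helper names and data are illustrative, not the API of any boosting library):

```python
def best_constant_mae(targets):
    """The constant minimizing mean absolute error over a set of
    targets is their median (illustrative check)."""
    s = sorted(targets)
    n = len(s)
    return s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2

def mae(c, targets):
    # Mean absolute error of the constant prediction c.
    return sum(abs(c - t) for t in targets) / len(targets)

data = [1.0, 2.0, 9.0]
med = best_constant_mae(data)
print(med)  # 2.0
# The median gives MAE no worse than a nearby constant.
print(mae(med, data) <= mae(3.0, data))  # True
```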
The result is a 3x2 matrix dLoss/dW2, which will update the original W2 values in a direction that minimizes the loss function.
最终可得到3x2矩阵dLoss/dW2,以在最小化损失函数的方向更新原始的W2值。
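The weight update described above amounts to one gradient-descent step; a minimal sketch with plain Python lists (the matrix values and learning rate are illustrative, only the names W2 and dLoss/dW2 come from the sentence):

```python
# Current 3x2 weight matrix and the gradient of the loss w.r.t. it.
W2 = [[0.5, -0.2], [0.1, 0.3], [-0.4, 0.7]]
dLoss_dW2 = [[0.2, -0.1], [0.0, 0.5], [-0.3, 0.1]]
lr = 0.1  # learning rate (illustrative)

# Move W2 against the gradient, the direction that decreases the loss.
W2_new = [[w - lr * g for w, g in zip(row_w, row_g)]
          for row_w, row_g in zip(W2, dLoss_dW2)]
print(round(W2_new[0][0], 2))  # 0.48
```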
The value of the loss function tells us how far from perfect the performance of our network on a given dataset is.
损失函数的值为我们提供了网络在给定数据集上的表现离完美有多远的测度。
Notice in our hidden layer, we added an l1 activity regularizer, which will apply a penalty to the loss function during the optimization phase.
注意,在我们的隐藏层中,我们添加了一个l1活动正则化器,它将在优化阶段对损失函数施加惩罚。
However, the loss function we minimise during image synthesis contains two terms, for content and style respectively, that are well separated (see Methods).
但是,我们在图像合成中最小化的损失函数分别包含内容项与风格项,两者被很好地分开(见方法部分)。
(c) If training, calculate an Expression representing the loss function, and use its backward() function to perform back-propagation.
(c)如果训练的话,计算损失函数的表达式,并使用它的backward()函数来进行反向传播。
One-dimensional optimization: although the loss function mainly depends on many parameters, not just one, one-dimensional optimization methods are of great importance here.
一维优化:虽然损失函数主要取决于许多参数而非单个参数,但一维优化方法在这里非常重要。
Although optimization provides a way to minimize the loss function for deep learning, in essence, the goals of optimization and deep learning are fundamentally different.
虽然优化为深度学习提供了最小化损失函数的方法,但本质上,优化与深度学习的目标是有区别的。
