What does SOFTMAX mean in English - English translation

Noun
softmax
a softmax

Examples of using Softmax in Chinese and their translations into English

所以softmax可以有效解决这个问题。
So the softmax function can effectively resolve this issue.
第五步是将每个值向量与softmax得分相乘(为对其求和做准备)。
The fifth step is to multiply each value vector by the softmax score (in preparation to sum them up).
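A minimal NumPy sketch of the step described above; the scores and value vectors below are made-up illustrations, not taken from any particular model:

```python
import numpy as np

# Illustrative softmax scores for 4 positions (they already sum to 1)
# and one 3-dimensional value vector per position.
scores = np.array([0.70, 0.20, 0.06, 0.04])
values = np.random.rand(4, 3)

# Multiply each value vector by its softmax score, in preparation
# to sum them into a single output vector.
weighted = scores[:, None] * values
output = weighted.sum(axis=0)
print(output.shape)  # (3,)
```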
这使softmax函数使用梯度下降的优化时很有用。
This makes the softmax function useful for optimization techniques that use gradient descent.
基本的skip-gram模型定义了经过softmax函数计算的概率p。
The basic skip-gram model defines the probability p through the softmax function.
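As a hedged illustration, the softmax-defined probability in the basic skip-gram model is commonly written as follows, where $v_w$ and $v'_w$ denote input and output word vectors (this is the usual word2vec notation, not necessarily the quoted source's):

```latex
p(w_O \mid w_I) = \frac{\exp\!\left({v'_{w_O}}^{\top} v_{w_I}\right)}{\sum_{w=1}^{W} \exp\!\left({v'_{w}}^{\top} v_{w_I}\right)}
```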
具体地说,网络“最后一层”的特征会进入softmax分类器。
Specifically, we consider the "last layer" of the network to be the features that go into the softmax classifier.
请记住,交叉熵涉及在softmax层的输出上计算的日志。
Remember that the cross-entropy involves a log, computed on the output of the softmax layer.
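A minimal sketch of where that log enters, assuming a single example with made-up logits and a one-hot label:

```python
import numpy as np

# Made-up logits and a one-hot label for a 3-class example.
logits = np.array([2.0, 1.0, 0.1])
target = np.array([1.0, 0.0, 0.0])

probs = np.exp(logits) / np.exp(logits).sum()    # output of the softmax layer
cross_entropy = -np.sum(target * np.log(probs))  # the log is taken on that output
print(cross_entropy)
```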
注:还有其他方式可以表示输出,但我只是显示softmax方法。
(Side note: there are other ways you can represent the output, but I am just showing the softmax approach.)
参数:dim(int)-这是将计算Softmax的那个维度(所以每个沿着dim的切片和为1).
dim (int) - the dimension along which Softmax will be computed (so every slice along dim will sum to 1).
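A small PyTorch sketch of the behaviour this parameter describes (the tensor shape is arbitrary):

```python
import torch
import torch.nn as nn

x = torch.randn(2, 5)
softmax = nn.Softmax(dim=1)   # compute Softmax along dimension 1
y = softmax(x)

# Every slice along dim sums to 1.
print(y.sum(dim=1))  # tensor([1., 1.]) up to floating-point error
```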
之后,softmax函数会生成一个(标准化)概率向量,对应于每个可能的类别。
The softmax function then generates a vector of (normalized) probabilities with one value for each possible class.
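In equation form (a standard definition, included here for clarity rather than quoted from the source), the softmax output for class $i$ out of $K$ classes is:

```latex
\sigma(z)_i = \frac{e^{z_i}}{\sum_{k=1}^{K} e^{z_k}}, \qquad \sum_{i=1}^{K} \sigma(z)_i = 1
```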
最后一层使用Softmax激活函数,而不是默认的Sigmoid函数。
The last layer uses the softmax activation function rather than the default sigmoid function.
许多权重的层被非线性函数(sigmoid、tanh、relu+softmax和炫酷的selu)分隔了。
Many layers of weights separated by non-linearities (sigmoid, tanh, relu + softmax, and the cool new selu).
在该步骤时,对于模型其余部分的参数,wordvectors:W以及softmax的权重,是固定的。
During this step, the parameters for the rest of the model (word vectors W and softmax weights U and b) are fixed.
接下来是带有多个滤波器的卷积层,然后是最大池层,最后是softmax分类器。
That's followed by a convolutional layer with multiple filters, then a max-pooling layer, and finally a softmax classifier.
不过,我想首先简要的介绍一下解决这个问题的另一种方法,这种方法是基于神经元中所谓的softmax层。
However, I want to briefly describe another approach to the problem, based on what are called softmax layers of neurons.
我们最终应用softmax激活函数并且得到一个描述单层神经网络的公式,并将其应用于100张图像:.
We finally apply the softmax activation function and obtain the formula describing a 1-layer neural network, applied to 100 images:
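The formula itself is not reproduced in this entry; assuming the usual MNIST setup of 784-pixel images and 10 classes (an assumption, not stated in the quoted sentence), a batch of 100 images gives something like:

```latex
Y = \mathrm{softmax}(X W + b), \qquad X \in \mathbb{R}^{100 \times 784},\; W \in \mathbb{R}^{784 \times 10},\; b \in \mathbb{R}^{10}
```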
这种三重损失的定义往往被称为SoftMax比,并且最早是由Ailonetal.[2]提出的。
This definition of the triplet loss is often referred to as the SoftMax ratio and was originally proposed by Ailon et al. [2].
它最后也本该有一个softmax层,但是因为BNNS没有使用softmax函数,所以我把它去掉了。
It would also have a softmax layer at the end, but because BNNS doesn't come with a softmax function I left it out.
由于大多数的词语只需要相对较少的参数,计算softmax的复杂度得到降低,训练速度因此提升。
As many words will only require comparatively few parameters, the complexity of computing the softmax is reduced, which speeds up training.
如果您已经知道MNIST是什么,以及softmax(多项式逻辑)回归是什么,那么您可以跳到节奏更快的教程。
If you already know what MNIST is, and what softmax(multinomial logistic) regression is, you might prefer this faster paced tutorial.
现在,改变权重衰减影响了logits的缩放,有效充当了softmax函数的temperature参数。
Now, varying weight decay influences the scaling of the logits, effectively acting as a temperature parameter for the softmax function.
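For reference, a temperature parameter $T$ enters the softmax by rescaling the logits, which is why a change in logit scale acts like a change in temperature:

```latex
\mathrm{softmax}_T(z)_i = \frac{e^{z_i / T}}{\sum_j e^{z_j / T}}
```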
最软层(softmaxlayer)被忽略,因为完全连接层的输出将成为另一个RNN的输入。
The softmax layer is disregarded as the outputs of the fully connected layer become the inputs to another RNN.
从游戏体验的角度来讲,Softmax可以使用UE3创建强大的加载系统,使得可以创建巨大的环境。
From a gameplay perspective, Softmax was able to use UE3 to create a powerful loading system that allowed for the creation of huge environments.
我们将在顶部使用三个卷积层,我们的传统softmax读出层在底部,并连接到一个完全连接的层:.
We will use three convolutional layers at the top, our traditional softmax readout layer at the bottom, and connect them with one fully-connected layer:
问题逆转softmax层假设我们有一个使用softmax输出层的神经网络,然后激活值a_j^L已知。
Problem: inverting the softmax layer. Suppose we have a neural network with a softmax output layer, and the activations $a^L_j$ are known.
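The standard resolution of this kind of exercise (stated here as background rather than quoted from the entry) is that the softmax can be inverted only up to an additive constant:

```latex
a^L_j = \frac{e^{z^L_j}}{\sum_k e^{z^L_k}} \;\Longrightarrow\; z^L_j = \ln a^L_j + C,
```

where $C$ is the same constant for every $j$.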
因为我们可以发现,我们仍然计算softmax的分母,但是用提议分布Q来替换分母的标准化。
As we can see, we still compute the numerator of the softmax, but replace the normalisation in the denominator with the proposal distribution (Q).
包含softmax和log-likelihood的反向传播-在上一章,我们推导了包含sigmoid层的网络的反向传播算法。
Backpropagation with softmax and the log-likelihood cost: in the last chapter we derived the backpropagation algorithm for a network containing sigmoid layers.