What does BATCH SIZE mean in Chinese - Chinese translation

[bætʃ saɪz]
批量大小
batch_size
批处理大小
批次大小
batch size

Examples of using Batch size in English and their translations into Chinese

Where B is the batch size.
其中B为批量大小。
Batch size for training the CNN is 64.
CNN训练时的batch size为64。
The Practical Science of Batch Size.
批量大小的实践科学。
Batch size is usually fixed during training and inference.
批次规模在训练和推断期间通常是固定的。
We also set the batch_size parameter.
我们还设置了batch_size参数。
Batch size is usually fixed during training and inference;
批次大小在训练和推断期间通常是固定的;
When performance starts to drop off, your batch size is too big.
当性能开始下降时,你的批量大小就太大了。
As I increased the batch size up to 4096, the generalization gap appeared.
当我将批量尺寸增大到4096时,泛化能力下降的现象出现了。
The number of images in the batch is called batch size.
一个batch中图像的数量称为batch size。
Batch size = the number of training examples in one forward/backward pass.
Batch size就是在一次前向/后向传播过程中用到的训练样例的数量。
The initial learning rate is 0.001 and the batch size is 256.
初始学习率为0.001,批量大小为256。
In general, a larger batch size can speed up training, but it does not always lead to faster convergence.
总的来说,较大的批量可以加快训练速度,但并不总能带来更快的收敛。
SGD prefers wide or sharp minima depending on its learning rate or batch size.
SGD依赖于其学习率或批大小而偏好宽的或尖锐的最小值。
Here's a summary of batch size experiments, which shows comparable accuracies across experiments:
下面是批处理大小实验的总结,显示了各个实验不相上下的准确性:
Output of nvidia-smi when training with Caffe, using a batch size of 8.
使用Caffe训练时nvidia-smi的输出,批处理大小为8。
The batch size is normally 32 or 64; we will use 64 since we have a fairly large number of images.
批量大小通常为32或64,因为我们有相当多的图像,所以我们将使用64。
Output of nvidia-smi while training with Caffe, using a batch size of 16.
使用Caffe训练时nvidia-smi的输出,批处理大小为16。
Owing to economic batch size, the cost functions may have discontinuities in addition to smooth changes.
由于经济批量,成本函数除了平滑变化之外还可能存在不连续性。
Sticking with input dimensions of 512×512 pixels, let's investigate what adjusting batch size does to accuracy.
输入尺寸仍为512×512像素,我们来研究一下调整批处理大小对准确性有何影响。
Batch size is usually fixed during training and inference; however, TensorFlow does permit dynamic batch sizes.
批量大小通常在训练与推理期间是固定的;然而,TensorFlow允许动态批量大小。
Once you have an idea of a stable configuration,you can try increasing the data rate and/or reducing the batch size.
一旦你有了一个稳定的配置,就可以尝试增加数据速率和/或减小批大小。
The batch size is a hyperparameter that defines the number of samples to work through before updating the internal model parameters.
Batch大小是一个超参数,用于定义在更新内部模型参数之前要处理的样本数。
If you have a surveillance camera,you have to process the images as they come in, so that batch size always equals 1.
如果你有监控摄像头,就必须在图像传入时逐帧处理,因此批大小始终等于1。
For example, the batch size of SGD is 1, while the batch size of a mini-batch is usually between 10 and 1000.
例如,SGD的批量大小为1,而mini-batch的批量大小通常在10-1000之间。
SGD*, PassiveAggressive*, and discrete NaiveBayes are truly online and are not affected by batch size.
SGD*、PassiveAggressive*和离散的NaiveBayes是真正在线的,不受batch大小的影响。
For the pre-training tasks, the batch size and the maximum path length are 50,000 and 500 respectively, the same as in the benchmark [5].
对预训练任务而言,batch size和最大路径长度分别为50000和500,与基准中的超参数相同。
For example, processing images at twice the resolution as before can have a similar effect as using four times the batch size.
例如,以先前两倍的分辨率处理图像,其效果与使用四倍的批次大小相似。
Therefore, seeing SGD as a distribution moving over time showed us that learning_rate/batch_size is more meaningful than either hyperparameter separately with regard to convergence and generalization.
因此,将SGD看作一个随时间移动的分布表明,就收敛性和泛化而言,learning_rate/batch_size比任一单独的超参数更有意义。
On ResNet-50 trained on ImageNet, GN has 10.6% lower error than its BN counterpart when using a batch size of 2;
在ImageNet上训练的ResNet-50中,批量大小为2时,GN的错误率比对应的BN低10.6%;
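Several of the example pairs above define the batch size as the number of samples processed before each update of the model parameters, and note that plain SGD is the batch-size-1 special case while mini-batches usually hold 10 to 1000 samples. As a minimal sketch of that relationship, assuming a linear least-squares model, synthetic data, and illustrative variable names not taken from any of the quoted sources:

import numpy as np

def minibatch_sgd(X, y, batch_size=64, lr=0.001, epochs=5):
    # Mini-batch SGD: batch_size samples are consumed before each
    # weight update. batch_size=1 recovers plain SGD; batch_size=len(X)
    # is full-batch gradient descent.
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        order = np.random.permutation(n)
        for start in range(0, n, batch_size):
            idx = order[start:start + batch_size]
            grad = X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)
            w -= lr * grad
    return w

# Synthetic data; shapes chosen only for illustration.
X = np.random.randn(1024, 10)
y = X @ np.ones(10) + 0.1 * np.random.randn(1024)
w = minibatch_sgd(X, y, batch_size=64)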
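Two other examples note that TensorFlow permits dynamic batch sizes and that a surveillance camera forces inference at batch size 1. A short Keras sketch of how the two combine, assuming tf.keras and a 512×512×3 input shape chosen only to echo the examples above:

import tensorflow as tf

# Leaving the batch dimension unspecified lets the same model train on
# mini-batches of 64 and later score single camera frames (batch size 1).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(512, 512, 3)),  # batch dimension is dynamic
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="binary_crossentropy")

# model.fit(train_images, train_labels, batch_size=64)  # hypothetical training call
frame = tf.random.uniform((1, 512, 512, 3))  # one incoming frame
prob = model(frame)                          # inference with batch size 1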
