Batchnorm
Enumerations

enum miopenBatchNormMode_t { miopenBNPerActivation = 0, miopenBNSpatial = 1 }

Functions

miopenStatus_t miopenDeriveBNTensorDescriptor(miopenTensorDescriptor_t derivedBnDesc, const miopenTensorDescriptor_t xDesc, miopenBatchNormMode_t bn_mode)
    Derive tensor for gamma and beta from input tensor descriptor.

miopenStatus_t miopenBatchNormalizationForwardTraining(miopenHandle_t handle, miopenBatchNormMode_t bn_mode, void *alpha, void *beta, const miopenTensorDescriptor_t xDesc, const void *x, const miopenTensorDescriptor_t yDesc, void *y, const miopenTensorDescriptor_t bnScaleBiasMeanVarDesc, void *bnScale, void *bnBias, double expAvgFactor, void *resultRunningMean, void *resultRunningVariance, double epsilon, void *resultSaveMean, void *resultSaveInvVariance)
    Execute forward training layer for batch normalization.

miopenStatus_t miopenBatchNormalizationForwardTraining_V2(miopenHandle_t handle, miopenBatchNormMode_t bn_mode, void *alpha, void *beta, const miopenTensorDescriptor_t xDesc, const void *x, const miopenTensorDescriptor_t yDesc, void *y, const miopenTensorDescriptor_t scaleDesc, const miopenTensorDescriptor_t biasVarDesc, const miopenTensorDescriptor_t savedMeanDesc, const miopenTensorDescriptor_t savedVarDesc, void *bnScale, void *bnBias, double expAvgFactor, void *resultRunningMean, void *resultRunningVariance, double epsilon, void *resultSaveMean, void *resultSaveInvVariance)
    Execute forward training layer for batch normalization.

miopenStatus_t miopenBatchNormalizationForwardInference(miopenHandle_t handle, miopenBatchNormMode_t bn_mode, void *alpha, void *beta, const miopenTensorDescriptor_t xDesc, const void *x, const miopenTensorDescriptor_t yDesc, void *y, const miopenTensorDescriptor_t bnScaleBiasMeanVarDesc, void *bnScale, void *bnBias, void *estimatedMean, void *estimatedVariance, double epsilon)
    Execute forward inference layer for batch normalization.

miopenStatus_t miopenBatchNormalizationForwardInference_V2(miopenHandle_t handle, miopenBatchNormMode_t bn_mode, void *alpha, void *beta, const miopenTensorDescriptor_t xDesc, const void *x, const miopenTensorDescriptor_t yDesc, void *y, const miopenTensorDescriptor_t scaleDesc, const miopenTensorDescriptor_t biasDesc, const miopenTensorDescriptor_t estMeanDesc, const miopenTensorDescriptor_t estVarianceDesc, void *bnScale, void *bnBias, void *estimatedMean, void *estimatedVariance, double epsilon)
    Execute forward inference layer for batch normalization.

miopenStatus_t miopenBatchNormalizationBackward(miopenHandle_t handle, miopenBatchNormMode_t bn_mode, const void *alphaDataDiff, const void *betaDataDiff, const void *alphaParamDiff, const void *betaParamDiff, const miopenTensorDescriptor_t xDesc, const void *x, const miopenTensorDescriptor_t dyDesc, const void *dy, const miopenTensorDescriptor_t dxDesc, void *dx, const miopenTensorDescriptor_t bnScaleBiasDiffDesc, const void *bnScale, void *resultBnScaleDiff, void *resultBnBiasDiff, double epsilon, const void *savedMean, const void *savedInvVariance)
    Execute backwards propagation layer for batch normalization.

miopenStatus_t miopenBatchNormalizationBackward_V2(miopenHandle_t handle, miopenBatchNormMode_t bn_mode, const void *alphaDataDiff, const void *betaDataDiff, const void *alphaParamDiff, const void *betaParamDiff, const miopenTensorDescriptor_t xDesc, const void *x, const miopenTensorDescriptor_t dyDesc, const void *dy, const miopenTensorDescriptor_t dxDesc, void *dx, const miopenTensorDescriptor_t scaleDesc, const miopenTensorDescriptor_t biasDesc, const miopenTensorDescriptor_t savedMeanDesc, const miopenTensorDescriptor_t savedVarDesc, const void *bnScale, void *resultBnScaleDiff, void *resultBnBiasDiff, double epsilon, const void *savedMean, const void *savedInvVariance)
    Execute backwards propagation layer for batch normalization.
Detailed Description
Enumeration Type Documentation
◆ miopenBatchNormMode_t
Function Documentation
◆ miopenBatchNormalizationBackward()
miopenStatus_t miopenBatchNormalizationBackward(
    miopenHandle_t handle,
    miopenBatchNormMode_t bn_mode,
    const void *alphaDataDiff,
    const void *betaDataDiff,
    const void *alphaParamDiff,
    const void *betaParamDiff,
    const miopenTensorDescriptor_t xDesc,
    const void *x,
    const miopenTensorDescriptor_t dyDesc,
    const void *dy,
    const miopenTensorDescriptor_t dxDesc,
    void *dx,
    const miopenTensorDescriptor_t bnScaleBiasDiffDesc,
    const void *bnScale,
    void *resultBnScaleDiff,
    void *resultBnBiasDiff,
    double epsilon,
    const void *savedMean,
    const void *savedInvVariance)
Execute backwards propagation layer for batch normalization.
Batch normalization pass for the backward propagation stage of training.
Takes in the batch normalization mode bn_mode, the input tensor x, the incoming delta tensor dy, and the output delta tensor dx, along with the learned gradient tensors resultBnBiasDiff and resultBnScaleDiff and their descriptor.
If both savedMean and savedInvVariance are non-null pointers, the method uses the saved mean and inverse variance computed during the forward training pass.
- Parameters
  - handle: MIOpen handle (input)
  - bn_mode: Batch normalization mode (input)
  - alphaDataDiff: Floating point scaling factor, allocated on the host (input)
  - betaDataDiff: Floating point shift factor, allocated on the host (input)
  - alphaParamDiff: Floating point scaling factor, allocated on the host (input)
  - betaParamDiff: Floating point shift factor, allocated on the host (input)
  - xDesc: Tensor descriptor for data input tensor x (input)
  - x: Data tensor x (input)
  - dyDesc: Tensor descriptor for delta tensor dy (input)
  - dy: Data delta tensor dy (input)
  - dxDesc: Tensor descriptor for output delta tensor dx (input)
  - dx: Data delta tensor dx (output)
  - bnScaleBiasDiffDesc: Tensor descriptor for BN scaling, shifting, saved variance and mean (input)
  - bnScale: Batch norm scaling (gamma) tensor (input)
  - resultBnScaleDiff: Tensor for dscale (output)
  - resultBnBiasDiff: Tensor for dbias (output)
  - epsilon: Value to stabilize inverse variance calculation (input)
  - savedMean: Saved mini-batch mean for backwards pass (input)
  - savedInvVariance: Saved mini-batch inverse variance for backwards pass (input)
- Returns
- miopenStatus_t
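As a rough illustration (not part of the reference entry above), the C fragment below sketches a spatial-mode backward call. The handle, the tensor descriptors, and the device buffer names (d_x, d_dy, d_dx, d_bnScale, d_dscale, d_dbias, d_savedMean, d_savedInvVar) are assumptions: they stand in for objects created earlier, typically alongside the matching forward training call.

    /* Sketch only: assumes handle, descriptors, and the d_* device buffers
     * already exist from the matching forward training pass. */
    float alphaDiff = 1.0f, betaDiff = 0.0f;   /* host-side blend factors */
    double epsilon = 1e-5;                     /* same epsilon as the forward pass */

    miopenStatus_t rc = miopenBatchNormalizationBackward(
        handle, miopenBNSpatial,
        &alphaDiff, &betaDiff,                 /* data gradient blending */
        &alphaDiff, &betaDiff,                 /* parameter gradient blending */
        xDesc, d_x,                            /* input saved from the forward pass */
        dyDesc, d_dy,                          /* incoming gradient */
        dxDesc, d_dx,                          /* gradient w.r.t. x (output) */
        bnScaleBiasDiffDesc, d_bnScale,        /* gamma tensor and its descriptor */
        d_dscale, d_dbias,                     /* dgamma, dbeta (outputs) */
        epsilon,
        d_savedMean, d_savedInvVar);           /* pass NULL for both to recompute */
    if (rc != miopenStatusSuccess) { /* handle the error */ }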
◆ miopenBatchNormalizationBackward_V2()
miopenStatus_t miopenBatchNormalizationBackward_V2(
    miopenHandle_t handle,
    miopenBatchNormMode_t bn_mode,
    const void *alphaDataDiff,
    const void *betaDataDiff,
    const void *alphaParamDiff,
    const void *betaParamDiff,
    const miopenTensorDescriptor_t xDesc,
    const void *x,
    const miopenTensorDescriptor_t dyDesc,
    const void *dy,
    const miopenTensorDescriptor_t dxDesc,
    void *dx,
    const miopenTensorDescriptor_t scaleDesc,
    const miopenTensorDescriptor_t biasDesc,
    const miopenTensorDescriptor_t savedMeanDesc,
    const miopenTensorDescriptor_t savedVarDesc,
    const void *bnScale,
    void *resultBnScaleDiff,
    void *resultBnBiasDiff,
    double epsilon,
    const void *savedMean,
    const void *savedInvVariance)
Execute backwards propagation layer for batch normalization.
Batch normalization pass for the backward propagation stage of training.
Takes in the batch normalization mode bn_mode, the input tensor x, the incoming delta tensor dy, and the output delta tensor dx, along with the learned gradient tensors resultBnBiasDiff and resultBnScaleDiff and their descriptors.
If both savedMean and savedInvVariance are non-null pointers, the method uses the saved mean and inverse variance computed during the forward training pass.
- Parameters
  - handle: MIOpen handle (input)
  - bn_mode: Batch normalization mode (input)
  - alphaDataDiff: Floating point scaling factor, allocated on the host (input)
  - betaDataDiff: Floating point shift factor, allocated on the host (input)
  - alphaParamDiff: Floating point scaling factor, allocated on the host (input)
  - betaParamDiff: Floating point shift factor, allocated on the host (input)
  - xDesc: Tensor descriptor for data input tensor x (input)
  - x: Data tensor x (input)
  - dyDesc: Tensor descriptor for delta tensor dy (input)
  - dy: Data delta tensor dy (input)
  - dxDesc: Tensor descriptor for output delta tensor dx (input)
  - dx: Data delta tensor dx (output)
  - scaleDesc: Tensor descriptor for BN scaling (gamma) tensor (input)
  - biasDesc: Tensor descriptor for BN bias/shift (beta) tensor (input)
  - savedMeanDesc: Tensor descriptor for saved mean tensor (input)
  - savedVarDesc: Tensor descriptor for saved variance tensor (input)
  - bnScale: Batch norm scaling (gamma) tensor (input)
  - resultBnScaleDiff: Tensor for dscale (output)
  - resultBnBiasDiff: Tensor for dbias (output)
  - epsilon: Value to stabilize inverse variance calculation (input)
  - savedMean: Saved mini-batch mean for backwards pass (input)
  - savedInvVariance: Saved mini-batch inverse variance for backwards pass (input)
- Returns
- miopenStatus_t
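The _V2 variant differs from the sketch shown for miopenBatchNormalizationBackward only in replacing the single bnScaleBiasDiffDesc with four per-tensor descriptors. A hedged fragment, reusing the assumed names from that example plus hypothetical scaleDesc, biasDesc, savedMeanDesc, and savedVarDesc descriptors:

    /* Sketch only: same assumed buffers as the previous example,
     * but with one descriptor per parameter tensor. */
    miopenStatus_t rc = miopenBatchNormalizationBackward_V2(
        handle, miopenBNSpatial,
        &alphaDiff, &betaDiff, &alphaDiff, &betaDiff,
        xDesc, d_x, dyDesc, d_dy, dxDesc, d_dx,
        scaleDesc, biasDesc, savedMeanDesc, savedVarDesc,  /* separate descriptors */
        d_bnScale, d_dscale, d_dbias,
        epsilon, d_savedMean, d_savedInvVar);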
◆ miopenBatchNormalizationForwardInference()
miopenStatus_t miopenBatchNormalizationForwardInference(
    miopenHandle_t handle,
    miopenBatchNormMode_t bn_mode,
    void *alpha,
    void *beta,
    const miopenTensorDescriptor_t xDesc,
    const void *x,
    const miopenTensorDescriptor_t yDesc,
    void *y,
    const miopenTensorDescriptor_t bnScaleBiasMeanVarDesc,
    void *bnScale,
    void *bnBias,
    void *estimatedMean,
    void *estimatedVariance,
    double epsilon)
Execute forward inference layer for batch normalization.
Batch normalization pass for forward inference. Takes in the batch normalization mode bn_mode, the input tensor x, the output tensor y, and the bnBias and bnScale tensors together with their descriptor.
If either estimatedMean or estimatedVariance is a null pointer, the mean and variance are computed from the input data and used in place of the estimates. If the variance is zero and epsilon is also zero, this function outputs NaN values. The epsilon input should always be a positive, non-zero value.
- Parameters
  - handle: MIOpen handle (input)
  - bn_mode: Batch normalization mode (input)
  - alpha: Floating point scaling factor, allocated on the host (input)
  - beta: Floating point shift factor, allocated on the host (input)
  - xDesc: Tensor descriptor for data input tensor x (input)
  - x: Data tensor x (input)
  - yDesc: Tensor descriptor for output data tensor y (input)
  - y: Data tensor y (output)
  - bnScaleBiasMeanVarDesc: Tensor descriptor for BN scaling, shifting, saved variance and mean (input)
  - bnScale: Batch norm scaling (gamma) tensor (input)
  - bnBias: Batch norm bias (beta) tensor (input)
  - estimatedMean: Running average saved during forward training (input)
  - estimatedVariance: Running variance saved during forward training (input)
  - epsilon: Value to stabilize inverse variance calculation (input)
- Returns
- miopenStatus_t
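A hedged usage sketch in C: run spatial-mode inference reusing the running statistics produced during training. Names such as d_x, d_y, d_bnScale, d_bnBias, d_runMean, and d_runVar are placeholders for device buffers and descriptors assumed to exist already.

    /* Sketch only: normalize with the running statistics saved during training. */
    float alpha = 1.0f, beta = 0.0f;           /* host-side blend factors */
    double epsilon = 1e-5;

    miopenStatus_t rc = miopenBatchNormalizationForwardInference(
        handle, miopenBNSpatial,
        &alpha, &beta,
        xDesc, d_x, yDesc, d_y,
        bnScaleBiasMeanVarDesc, d_bnScale, d_bnBias,
        d_runMean, d_runVar,                   /* estimatedMean / estimatedVariance */
        epsilon);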
◆ miopenBatchNormalizationForwardInference_V2()
miopenStatus_t miopenBatchNormalizationForwardInference_V2(
    miopenHandle_t handle,
    miopenBatchNormMode_t bn_mode,
    void *alpha,
    void *beta,
    const miopenTensorDescriptor_t xDesc,
    const void *x,
    const miopenTensorDescriptor_t yDesc,
    void *y,
    const miopenTensorDescriptor_t scaleDesc,
    const miopenTensorDescriptor_t biasDesc,
    const miopenTensorDescriptor_t estMeanDesc,
    const miopenTensorDescriptor_t estVarianceDesc,
    void *bnScale,
    void *bnBias,
    void *estimatedMean,
    void *estimatedVariance,
    double epsilon)
Execute forward inference layer for batch normalization.
Batch normalization pass for forward inference. Takes in the batch normalization mode bn_mode, the input tensor x, the output tensor y, and the bnBias and bnScale tensors together with their descriptors.
If either estimatedMean or estimatedVariance is a null pointer, the mean and variance are computed from the input data and used in place of the estimates. If the variance is zero and epsilon is also zero, this function outputs NaN values. The epsilon input should always be a positive, non-zero value.
- Parameters
  - handle: MIOpen handle (input)
  - bn_mode: Batch normalization mode (input)
  - alpha: Floating point scaling factor, allocated on the host (input)
  - beta: Floating point shift factor, allocated on the host (input)
  - xDesc: Tensor descriptor for data input tensor x (input)
  - x: Data tensor x (input)
  - yDesc: Tensor descriptor for output data tensor y (input)
  - y: Data tensor y (output)
  - scaleDesc: Tensor descriptor for BN scaling (input)
  - biasDesc: Tensor descriptor for BN bias (input)
  - estMeanDesc: Tensor descriptor for BN estimated mean (input)
  - estVarianceDesc: Tensor descriptor for BN estimated variance (input)
  - bnScale: Batch norm scaling (gamma) tensor (input)
  - bnBias: Batch norm bias (beta) tensor (input)
  - estimatedMean: Running average saved during forward training (input)
  - estimatedVariance: Running variance saved during forward training (input)
  - epsilon: Value to stabilize inverse variance calculation (input)
- Returns
- miopenStatus_t
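The _V2 inference call differs from the sketch shown for miopenBatchNormalizationForwardInference only in carrying one descriptor per parameter tensor. A fragment under the same assumptions, with hypothetical scaleDesc, biasDesc, estMeanDesc, and estVarianceDesc descriptors:

    /* Sketch only: per-tensor descriptors instead of one shared descriptor. */
    miopenStatus_t rc = miopenBatchNormalizationForwardInference_V2(
        handle, miopenBNSpatial,
        &alpha, &beta,
        xDesc, d_x, yDesc, d_y,
        scaleDesc, biasDesc, estMeanDesc, estVarianceDesc,
        d_bnScale, d_bnBias,
        d_runMean, d_runVar,
        epsilon);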
◆ miopenBatchNormalizationForwardTraining()
miopenStatus_t miopenBatchNormalizationForwardTraining(
    miopenHandle_t handle,
    miopenBatchNormMode_t bn_mode,
    void *alpha,
    void *beta,
    const miopenTensorDescriptor_t xDesc,
    const void *x,
    const miopenTensorDescriptor_t yDesc,
    void *y,
    const miopenTensorDescriptor_t bnScaleBiasMeanVarDesc,
    void *bnScale,
    void *bnBias,
    double expAvgFactor,
    void *resultRunningMean,
    void *resultRunningVariance,
    double epsilon,
    void *resultSaveMean,
    void *resultSaveInvVariance)
Execute forward training layer for batch normalization.
Batch normalization pass for forward training. Takes in the batch normalization mode bn_mode, the input tensor x, the output tensor y, and the bnBias and bnScale tensors together with their descriptor.
If either resultSaveMean or resultSaveInvVariance is a null pointer, the mini-batch mean and inverse variance are not saved.
Likewise, if either resultRunningMean or resultRunningVariance is a null pointer, the running mean and variance are not saved. Running averages and variances are scaled using an exponential averaging factor:
\[ \mu_{old} = \mu_{new}*factor + \mu_{old}*(1-factor) \]
where
\[ factor=1/(1+iteration) \]
- Parameters
  - handle: MIOpen handle (input)
  - bn_mode: Batch normalization mode (input)
  - alpha: Floating point scaling factor, allocated on the host (input)
  - beta: Floating point shift factor, allocated on the host (input)
  - xDesc: Tensor descriptor for data input tensor x (input)
  - x: Data tensor x (input)
  - yDesc: Tensor descriptor for output data tensor y (input)
  - y: Data tensor y (output)
  - bnScaleBiasMeanVarDesc: Tensor descriptor for BN scaling, shifting, saved variance and mean (input)
  - bnScale: Batch norm scaling (gamma) tensor (input)
  - bnBias: Batch norm bias (beta) tensor (input)
  - expAvgFactor: Exponential averaging factor (input)
  - resultRunningMean: Running average saved for inference (output)
  - resultRunningVariance: Running variance saved for inference (output)
  - epsilon: Value to stabilize inverse variance calculation (input)
  - resultSaveMean: Saved mini-batch mean for backwards pass (output)
  - resultSaveInvVariance: Saved mini-batch inverse variance for backwards pass (output)
- Returns
- miopenStatus_t
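A hedged sketch of one spatial-mode training step in C. The shared descriptor and the device buffers (d_x, d_y, d_bnScale, d_bnBias, d_runMean, d_runVar, d_saveMean, d_saveInvVar) are assumed placeholders; the saved mean and inverse variance are kept so they can be fed into the backward pass.

    /* Sketch only: one forward training step with running-stat updates. */
    float alpha = 1.0f, beta = 0.0f;           /* host-side blend factors */
    double expAvgFactor = 0.1;                 /* exponential averaging factor */
    double epsilon = 1e-5;

    miopenStatus_t rc = miopenBatchNormalizationForwardTraining(
        handle, miopenBNSpatial,
        &alpha, &beta,
        xDesc, d_x, yDesc, d_y,
        bnScaleBiasMeanVarDesc, d_bnScale, d_bnBias,
        expAvgFactor,
        d_runMean, d_runVar,                   /* running statistics (outputs) */
        epsilon,
        d_saveMean, d_saveInvVar);             /* saved for the backward pass */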
◆ miopenBatchNormalizationForwardTraining_V2()
miopenStatus_t miopenBatchNormalizationForwardTraining_V2(
    miopenHandle_t handle,
    miopenBatchNormMode_t bn_mode,
    void *alpha,
    void *beta,
    const miopenTensorDescriptor_t xDesc,
    const void *x,
    const miopenTensorDescriptor_t yDesc,
    void *y,
    const miopenTensorDescriptor_t scaleDesc,
    const miopenTensorDescriptor_t biasVarDesc,
    const miopenTensorDescriptor_t savedMeanDesc,
    const miopenTensorDescriptor_t savedVarDesc,
    void *bnScale,
    void *bnBias,
    double expAvgFactor,
    void *resultRunningMean,
    void *resultRunningVariance,
    double epsilon,
    void *resultSaveMean,
    void *resultSaveInvVariance)
Execute forward training layer for batch normalization.
Batch normalization pass for forward training. Takes in the batch normalization mode bn_mode, the input tensor x, the output tensor y, and the bnBias and bnScale tensors together with their descriptors.
If either resultSaveMean or resultSaveInvVariance is a null pointer, the mini-batch mean and inverse variance are not saved.
Likewise, if either resultRunningMean or resultRunningVariance is a null pointer, the running mean and variance are not saved. Running averages and variances are scaled using an exponential averaging factor:
\[ \mu_{old} = \mu_{new}*factor + \mu_{old}*(1-factor) \]
where
\[ factor=1/(1+iteration) \]
- Parameters
  - handle: MIOpen handle (input)
  - bn_mode: Batch normalization mode (input)
  - alpha: Floating point scaling factor, allocated on the host (input)
  - beta: Floating point shift factor, allocated on the host (input)
  - xDesc: Tensor descriptor for data input tensor x (input)
  - x: Data tensor x (input)
  - yDesc: Tensor descriptor for output data tensor y (input)
  - y: Data tensor y (output)
  - scaleDesc: Tensor descriptor for BN scaling (input)
  - biasVarDesc: Tensor descriptor for BN bias (input)
  - savedMeanDesc: Tensor descriptor for BN saved mean (input)
  - savedVarDesc: Tensor descriptor for BN saved variance (input)
  - bnScale: Batch norm scaling (gamma) tensor (input)
  - bnBias: Batch norm bias (beta) tensor (input)
  - expAvgFactor: Exponential averaging factor (input)
  - resultRunningMean: Running average saved for inference (output)
  - resultRunningVariance: Running variance saved for inference (output)
  - epsilon: Value to stabilize inverse variance calculation (input)
  - resultSaveMean: Saved mini-batch mean for backwards pass (output)
  - resultSaveInvVariance: Saved mini-batch inverse variance for backwards pass (output)
- Returns
- miopenStatus_t
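As with the other _V2 entry points, the training call differs from the sketch above only in taking per-tensor descriptors. A fragment under the same assumptions, with hypothetical scaleDesc, biasVarDesc, savedMeanDesc, and savedVarDesc descriptors:

    /* Sketch only: V2 training call with separate parameter descriptors. */
    miopenStatus_t rc = miopenBatchNormalizationForwardTraining_V2(
        handle, miopenBNSpatial,
        &alpha, &beta,
        xDesc, d_x, yDesc, d_y,
        scaleDesc, biasVarDesc, savedMeanDesc, savedVarDesc,
        d_bnScale, d_bnBias,
        expAvgFactor,
        d_runMean, d_runVar,
        epsilon,
        d_saveMean, d_saveInvVar);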
◆ miopenDeriveBNTensorDescriptor()
miopenStatus_t miopenDeriveBNTensorDescriptor(
    miopenTensorDescriptor_t derivedBnDesc,
    const miopenTensorDescriptor_t xDesc,
    miopenBatchNormMode_t bn_mode)
Derive tensor for gamma and beta from input tensor descriptor.
This function takes the input tensor descriptor and outputs a derived tensor for the normalization scale (gamma) and shift (beta) tensors.
For an input tensor NCHW and spatial mode, the output derived tensor is 1C11, while for per-activation the derived tensor is 1CHW.
For an input tensor NCDHW and spatial mode, the output derived tensor is 1C111, while for per-activation the derived tensor is 1CDHW.
- Parameters
  - derivedBnDesc: Output derived tensor descriptor (output)
  - xDesc: Input tensor descriptor (input)
  - bn_mode: Batch normalization mode (input)
- Returns
- miopenStatus_t
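A small self-contained sketch of how the derived descriptor might be obtained for an NCHW input; the 16x64x28x28 dimensions are arbitrary example values, not part of the reference above.

    /* Sketch only: derive the gamma/beta descriptor from an NCHW input. */
    #include <miopen/miopen.h>

    miopenTensorDescriptor_t xDesc, bnDesc;
    miopenCreateTensorDescriptor(&xDesc);
    miopenCreateTensorDescriptor(&bnDesc);

    /* Example input: N=16, C=64, H=28, W=28, float data. */
    miopenSet4dTensorDescriptor(xDesc, miopenFloat, 16, 64, 28, 28);

    /* Spatial mode yields a 1x64x1x1 descriptor for gamma and beta;
     * per-activation mode would yield 1x64x28x28. */
    miopenDeriveBNTensorDescriptor(bnDesc, xDesc, miopenBNSpatial);

    /* ... use bnDesc to size gamma/beta buffers for the BN calls above ... */
    miopenDestroyTensorDescriptor(bnDesc);
    miopenDestroyTensorDescriptor(xDesc);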