Objective functions usually regularize a model's weight parameters (the weights feeding into the non-linear activation functions) rather than its bias terms. Which of the following statements are true? Check all that apply.

- The bias term shifts the prediction in the direction of the class distributions.
- Regularization slightly modifies the learning algorithm so that the model generalizes better.
- The bias acts like a non-uniform prior.
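
The convention above can be sketched in code. The following is a minimal, hypothetical NumPy example of logistic regression with an L2 ("weight decay") penalty: note that the penalty gradient is added only to the weights, while the bias is updated without any penalty, leaving it free to shift toward the class prior. All names and hyperparameters here are illustrative assumptions, not taken from the original question.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                                  # toy inputs
y = (X @ np.array([1.0, -2.0, 0.5]) + 3.0 > 0).astype(float)   # imbalanced labels (large true intercept)

W = np.zeros(3)     # weights: regularized
b = 0.0             # bias: NOT regularized
lam, lr = 0.1, 0.1  # illustrative penalty strength and learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(500):
    p = sigmoid(X @ W + b)
    grad_W = X.T @ (p - y) / len(y) + lam * W   # L2 penalty term applied to weights only
    grad_b = np.mean(p - y)                     # no penalty term for the bias
    W -= lr * grad_W
    b -= lr * grad_b
```

Because the labels above are mostly positive, the unpenalized bias drifts positive to match the class distribution, illustrating why penalizing it would needlessly fight the data's prior.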