
ReLU is so dominant in the field because it embraces the fact that all a neural network does is slice and dice the input space: linear transformation after linear transformation, hyperplane after hyperplane, layer after layer.
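A minimal sketch of that slicing (sizes and names are made up for illustration): each ReLU unit in a layer owns one hyperplane w·x + b = 0, and the pattern of which side a point falls on identifies the linear region it lives in.

```python
import numpy as np

rng = np.random.default_rng(0)

# One hidden ReLU layer: each of the 6 units defines a hyperplane w.x + b = 0 in 2-D input space.
W = rng.normal(size=(6, 2))
b = rng.normal(size=6)

# Sample the input space and record, for every point, which side of each hyperplane it falls on.
X = rng.uniform(-3, 3, size=(100_000, 2))
patterns = X @ W.T + b > 0  # boolean activation pattern per point

# Each distinct pattern is one region carved out by the hyperplanes;
# inside a region the layer (and any linear map stacked after it) is purely affine.
n_regions = len(np.unique(patterns, axis=0))
print(f"{n_regions} linear regions from 6 hyperplanes")  # at most 1 + 6 + C(6,2) = 22 in 2-D
```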

Piecewise linearity is maybe the only useful non-linearity that can happen in a neural network.
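A quick numerical check of that claim, again just a sketch with arbitrary sizes: push a dense 1-D grid through a small random ReLU net and look at the finite-difference slopes. They collapse to a handful of distinct values, roughly one per linear piece, because the function is piecewise linear.

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny 1 -> 8 -> 8 -> 1 ReLU network with random weights.
W1, b1 = rng.normal(size=(8, 1)), rng.normal(size=8)
W2, b2 = rng.normal(size=(8, 8)), rng.normal(size=8)
W3, b3 = rng.normal(size=(1, 8)), rng.normal(size=1)

def net(x):
    h = np.maximum(x @ W1.T + b1, 0)
    h = np.maximum(h @ W2.T + b2, 0)
    return h @ W3.T + b3

x = np.linspace(-5, 5, 200_001).reshape(-1, 1)
y = net(x).ravel()

# Finite-difference slopes between neighbouring grid points: within one linear piece
# they are identical, and only the few secants straddling a kink add extra values.
slopes = np.diff(y) / np.diff(x.ravel())
print(f"{len(np.unique(np.round(slopes, 6)))} distinct slopes over 200k sample points")
```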

Nothing smooth to see there. If you need some smoothness, maybe you should go back to feature-engineering it.