Do We Actually Need Dense Over-Parameterization?
In-Time Over-Parameterization in Sparse Training
Shiwei Liu 1, Lu Yin 1, Decebal Constantin Mocanu 1 2, Mykola Pechenizkiy 1
Abstract
In this paper, we introduce a new perspective on training deep neural networks capable of state-of-the-art performance without the need for expensive over-parameterization, by proposing the concept of In-Time Over-Parameterization (ITOP) in sparse training ...