This work introduces a class of CNNs called deep convolutional generative adversarial networks (DCGANs) that have certain architectural constraints, and demonstrates that they are a strong candidate for unsupervised learning.

This work proposes a new equilibrium enforcing method paired with a loss derived from the Wasserstein distance for training auto-encoder based Generative Adversarial Networks, which provides a new approximate convergence measure, fast and stable training and high visual quality.

This work introduces a method to stabilize Generative Adversarial Networks by defining the generator objective with respect to an unrolled optimization of the discriminator, and shows how this technique solves the common problem of mode collapse, stabilizes training of GANs with complex recurrent generators, and increases diversity and coverage of the data distribution by the generator.

A fundamental tradeoff between the amount of test error and the average distance to the nearest error is shown, which proves that any model that misclassifies a small constant fraction of a sphere in $d$ dimensions will be vulnerable to adversarial perturbations of size $O(1/\sqrt{d})$.

This work proposes Guided Evolutionary Strategies, a method for optimally using surrogate gradient directions along with random search, and defines a search distribution for evolutionary strategies that is elongated along a guiding subspace spanned by the surrogate gradients.
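The elongated search distribution described above can be sketched in a few lines of numpy. This is a minimal, hedged illustration (not the authors' reference implementation): perturbations are drawn from a Gaussian whose covariance mixes an isotropic term with a term concentrated in the subspace spanned by the surrogate gradients, and an antithetic finite-difference estimate is used for the update. The function name `guided_es_step` and all hyperparameter defaults are illustrative assumptions.

```python
import numpy as np

def guided_es_step(f, x, surrogate_grads, sigma=0.1, alpha=0.5,
                   lr=0.05, pairs=10, rng=None):
    """One Guided-ES-style update (sketch, not the reference implementation).

    f               : objective function, maps vector -> scalar
    x               : current parameter vector
    surrogate_grads : list of surrogate gradient vectors spanning the
                      guiding subspace
    alpha           : trade-off between isotropic and subspace sampling
    """
    rng = np.random.default_rng() if rng is None else rng
    n = x.size
    # Orthonormal basis U of the guiding subspace via QR decomposition.
    U, _ = np.linalg.qr(np.asarray(surrogate_grads, dtype=float).T)
    k = U.shape[1]
    grad_est = np.zeros(n)
    for _ in range(pairs):
        # Sample eps ~ N(0, sigma^2 * (alpha/n * I + (1-alpha)/k * U U^T)):
        # an isotropic component plus a component elongated along the
        # surrogate-gradient subspace.
        eps = sigma * (np.sqrt(alpha / n) * rng.standard_normal(n)
                       + np.sqrt((1 - alpha) / k) * U @ rng.standard_normal(k))
        # Antithetic pair of function evaluations.
        grad_est += (f(x + eps) - f(x - eps)) * eps
    grad_est /= (2 * sigma**2 * pairs)
    return x - lr * grad_est
```

On a simple quadratic with the true gradient supplied as the surrogate, repeated calls to `guided_es_step` drive the objective down, which is the behavior the summary describes: random search biased toward, but not restricted to, the surrogate directions.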

This paper shows how Q-values and policies over continuous spaces can be modeled using a next step prediction model over discretized dimensions, and demonstrates empirically that the method can perform global search, which effectively gets around the local optimization issues that plague DDPG.

This work proposes a training scheme which overcomes both of these difficulties, by dynamically weighting two unbiased gradient estimators for a variational loss on optimizer performance, allowing us to train neural networks to perform optimization of a specific task faster than tuned first-order methods.

This work meta-learned an algorithm -- an unsupervised weight update rule -- that produces representations that perform well under this meta-objective, which enables it to generalize to novel neural network architectures.

This work targets semi-supervised classification performance and meta-learns an algorithm -- an unsupervised weight update rule -- that produces representations useful for this task and is constrained to be a biologically-motivated, neuron-local function, which enables it to generalize to different neural network architectures, datasets, and data modalities.

A necessary condition for an evaluation metric not to behave this way is clarified: estimating the function must require a large sample from the model, so that the resulting benchmarks cannot be "won" by training-set memorization, while still being perceptually correlated and computable only from samples.