In Valiant’s model of evolution, a class of representations is evolvable iff a polynomial-time process of random mutations guided by selection converges with high probability to a representation $\epsilon$-close to the optimal one, for any required $\epsilon>0$. Several previous positive results can be related to evolving a vector space, but each imposes restrictions on (re)initialisations, distributions, performance functions, and/or the mutator. In this paper, we show that all it takes to evolve a complete normed vector space is merely a set that generates the space. Furthermore, evolution takes only $\tilde{O}(1/\epsilon^2)$ steps, is essentially strictly monotonic and agnostic, and handles target drifts that rival those proven in fairly restricted settings. In the context of the model, we bring to the fore new results not previously documented. Evolution appears to occur in a mean-divergence model reminiscent of Markowitz’s mean-variance model for portfolio selection, and the risk-return efficient frontier of evolution shows an interesting pattern: when far from the optimum, the mutator always has access to mutations close to the efficient frontier. Toy experiments in supervised and unsupervised learning display promising directions for this scheme as a (new) provable gradient-free stochastic optimisation algorithm.

## Distribution-free Evolvability of Vector Spaces: All it takes is a Generating Set

Abstract · Apr 10, 2017 04:41
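To make the flavour of the scheme concrete, here is a minimal sketch of a mutation-selection loop used as gradient-free stochastic optimisation. This is a hypothetical illustration, not the paper's algorithm: it assumes representations are coefficient vectors over a generating set, mutations perturb one coefficient at random, and selection keeps a mutation only if it does not worsen the loss.

```python
import random

def evolve(generating_set, loss, steps=2000, sigma=0.1, seed=0):
    """Toy mutation-selection search over the span of a generating set.

    Hypothetical sketch (not the paper's mutator): a representation is a
    list of coefficients over `generating_set`; each step perturbs one
    coefficient with Gaussian noise and keeps the mutation only if the
    loss does not increase (selection).
    """
    rng = random.Random(seed)
    d = len(generating_set)
    coeffs = [0.0] * d                      # start from the zero vector
    best = loss(coeffs, generating_set)
    for _ in range(steps):
        i = rng.randrange(d)                # mutate one coordinate
        candidate = list(coeffs)
        candidate[i] += rng.gauss(0.0, sigma)
        cand_loss = loss(candidate, generating_set)
        if cand_loss <= best:               # selection step
            coeffs, best = candidate, cand_loss
    return coeffs, best

# Example: approximate the target vector (1, 2) in the plane, using a
# redundant generating set of R^2 (deliberately not a basis).
gens = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]

def sq_dist_to_target(coeffs, gens, target=(1.0, 2.0)):
    # Squared Euclidean distance from the represented vector to target.
    v = [sum(c * g[k] for c, g in zip(coeffs, gens)) for k in range(2)]
    return sum((vk - tk) ** 2 for vk, tk in zip(v, target))

coeffs, final_loss = evolve(gens, sq_dist_to_target)
```

Note that selection sees only the loss value, never a gradient, which is what makes the scheme usable when the performance function is a black box.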