The neurosciences present some of the steepest challenges to machine learning. Among diverse problem settings and approaches, certain commonalities can be identified. Nearly always there is a very high-dimensional input structure, particularly relative to the number of exemplars, since each data point is usually gathered at a high cost in time and money. To avoid overfitted solutions, inference must therefore make considerable use of domain knowledge from physics, neurophysiology and anatomy. Solutions typically occupy a relatively small subspace of the input representation, the rest being made up of noise that may be of much larger magnitude, often composed largely of the manifestations of neurophysiological processes other than the ones of interest. In finding generalizable solutions, one usually has to contend with a high degree of variability, both between individuals and across time, leading to problems of covariate shift and non-stationarity. In all cases, even the high-dimensional raw input is a vastly simplified reflection of the underlying processes and structures. New ways of measuring relevant information, and new ways of transforming the data, therefore remain to be found, leading to feature representations that are more relevant, less noisy, or more transferable between experimental sessions and subjects.
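The combination of challenges described above can be made concrete with a toy simulation. The following sketch (all dimensions, amplitudes and the "known subspace" are illustrative assumptions, not taken from the text) generates few trials of high-dimensional data in which the class-relevant signal lives in a small subspace and the background noise has much larger total magnitude; a simple linear classifier trained on all dimensions is then compared with one restricted, by assumed prior knowledge, to the signal subspace:

```python
import numpy as np

# Illustrative toy setting (all numbers are assumptions): few trials,
# many channels, with the class-relevant signal confined to a small
# subspace and buried in background noise of larger total magnitude.
rng = np.random.default_rng(0)
n_train, n_test, p, k = 40, 2000, 2000, 5   # few exemplars, many dimensions
amp, noise_sd = 1.5, 1.0                    # per-trial noise norm ~sqrt(p) >> amp

w_true = np.zeros(p)
w_true[:k] = 1.0 / np.sqrt(k)               # unit-norm signal direction

def make_data(n):
    """Binary-labelled trials: low-amplitude signal plus isotropic noise."""
    y = rng.choice([-1.0, 1.0], size=n)
    X = amp * np.outer(y, w_true) + noise_sd * rng.standard_normal((n, p))
    return X, y

X_tr, y_tr = make_data(n_train)
X_te, y_te = make_data(n_test)

def corr_classifier(X, y, dims):
    """Mean-difference (correlation) weights on a chosen set of dimensions."""
    w = np.zeros(p)
    w[dims] = X[:, dims].T @ y
    return w

dims_all = np.arange(p)   # no prior knowledge: use every dimension
dims_known = np.arange(k)  # "domain knowledge": restrict to the signal subspace

accs = {}
for name, dims in [("all channels", dims_all), ("known subspace", dims_known)]:
    w = corr_classifier(X_tr, y_tr, dims)
    accs[name] = np.mean(np.sign(X_te @ w) == y_te)
    print(f"{name}: test accuracy = {accs[name]:.2f}")
```

With so few exemplars, the classifier trained on all dimensions spreads its weights over thousands of noise coordinates and generalizes poorly, whereas restricting inference to the (assumed known) signal subspace recovers most of the discriminative information; this is the sense in which domain knowledge substitutes for data.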