A number of questions in mathematics are known to be ‘undecidable’: the statement can be proved neither true nor false within the standard axioms of mathematics. Recently a team of mathematicians proved that “learnability can be undecidable”, where learnability is defined as the ability to make predictions about a large data set after sampling only a small number of its points. Ultimately the question is whether an algorithm can generalize its knowledge: given the answer to a yes-or-no question such as “Does this image show a cat?” for a limited number of objects, the algorithm should decide the answer for new, unseen objects. Link
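As a toy illustration of generalizing from a small labelled sample (my own sketch, not the learning model studied in the paper), a simple nearest-neighbour rule answers the yes-or-no question for a new object by copying the label of the most similar object it has already seen:

```python
# Minimal sketch of generalization from a small labelled sample,
# using a toy 1-nearest-neighbour rule (a hypothetical illustration,
# not the paper's setting). Each "object" is a feature vector; the
# label answers a yes/no question such as "Does this image show a cat?".

def nearest_neighbour_predict(sample, query):
    """Predict the label of `query` from a small labelled `sample`.

    sample: list of (features, label) pairs seen so far
    query:  feature vector of a new, unseen object
    """
    def dist(a, b):
        # Squared Euclidean distance between two feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))

    # Copy the label of the closest previously-seen object.
    _, label = min(sample, key=lambda pair: dist(pair[0], query))
    return label

# Hypothetical training data: 2-D features, label True = "is a cat".
sample = [((0.9, 0.8), True), ((0.1, 0.2), False), ((0.8, 0.9), True)]

print(nearest_neighbour_predict(sample, (0.85, 0.75)))  # prints True
```

The undecidability result concerns whether such generalization is possible at all in a particular abstract model, not the behaviour of any specific rule like this one.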
While investigating the connection between learnability and ‘compression’, which means summarizing the salient features of a large data set in a smaller one, the authors discovered that whether the information can be compressed efficiently is linked to the continuum hypothesis, a famous statement about the different possible sizes of sets containing infinitely many objects that is itself undecidable. The result appears important for the theory of machine learning, although it is unclear whether it will have much impact on practice. Read more
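To make the idea of compression concrete, here is a classic toy example in the spirit of sample-compression schemes (my own illustration, assuming threshold classifiers on the real line; it is not the construction used in the paper). A whole labelled sample is compressed down to a single number, from which a consistent classifier can be rebuilt:

```python
# Toy sample-compression sketch for threshold classifiers on the line
# (a hypothetical illustration, not the paper's construction).
# A hypothesis labels x as True iff x >= threshold.

def compress(sample):
    """Keep only the boundary: the smallest positively-labelled point."""
    positives = [x for x, label in sample if label]
    return min(positives) if positives else float("inf")

def decompress(threshold):
    """Rebuild a full classifier from the single retained number."""
    return lambda x: x >= threshold

# A labelled sample of four points, compressed to one number.
sample = [(0.2, False), (0.5, False), (0.7, True), (0.9, True)]
h = decompress(compress(sample))

# The reconstructed classifier agrees with every point of the sample.
print(all(h(x) == label for x, label in sample))  # prints True
```

The paper's point is that for some learning problems, whether such an efficient compression exists cannot be settled by the standard axioms at all.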