Undecidability in Machine Learning

A number of technical mathematical questions are known to be ‘undecidable’: a statement is undecidable when it can be proved neither true nor false within the standard axioms of mathematics. Recently a team of mathematicians proved that “learnability can be undecidable”, where “learnability” is defined as the ability to make predictions about a large data set by sampling a small number of data points. Ultimately the question is whether an algorithm can generalize its knowledge. For instance, given the answer to a yes-or-no question such as “Does this image show a cat?” for a limited number of objects, the algorithm should decide the answer for new, unseen objects. Link
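To make the notion of generalizing from a small sample concrete, here is a minimal sketch in Python. The feature values, labels, and the nearest-neighbour rule are all illustrative assumptions on my part, not part of the cited result:

```python
# Toy illustration of "learnability": generalizing a yes/no label
# from a small labeled sample to unseen points. The features, labels,
# and the 1-nearest-neighbour rule are illustrative assumptions only.

def nearest_neighbour_label(x, sample):
    """Predict the label of x from a list of (feature, label) pairs
    by copying the label of the closest sampled point."""
    closest = min(sample, key=lambda pair: abs(pair[0] - x))
    return closest[1]

# Small labeled sample: in this toy, features below 0.5 are "not a cat".
sample = [(0.1, False), (0.3, False), (0.7, True), (0.9, True)]

# The learner generalizes to points it has never seen.
predictions = [nearest_neighbour_label(x, sample) for x in (0.2, 0.8)]
print(predictions)  # [False, True]
```

The point is only that the learner answers correctly on inputs outside its sample; whether such generalization is possible at all, for a given class of problems, is what the paper shows can be undecidable.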

While investigating the connection between learnability and ‘compression’, which means summarizing the salient features of a large data set in a much smaller one, the authors discovered that whether the information can be compressed efficiently is linked to the continuum hypothesis, a statement about the possible sizes of infinite sets that can be neither proved nor disproved from the standard axioms of set theory. The result appears to be important for the theory of machine learning, although it is not yet clear whether it will have much impact on practice. Read more
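A toy example may help convey what ‘compression’ means here. For one-dimensional threshold classifiers, an entire labeled sample can be summarized by a single number, from which the classifier is reconstructed. This sketch is purely illustrative; the paper studies a far more general notion of compression:

```python
# Toy "sample compression" for 1-D threshold classifiers: the whole
# labeled sample is summarized by one number (the decision boundary),
# and the classifier is rebuilt from that summary alone.
# Illustrative only; not the construction used in the paper.

def compress(sample):
    """Summarize (feature, label) pairs by a single threshold value:
    the smallest feature that was labeled True."""
    positives = [x for x, label in sample if label]
    return min(positives) if positives else float("inf")

def decompress(threshold):
    """Rebuild the classifier from the compressed summary."""
    return lambda x: x >= threshold

sample = [(0.1, False), (0.3, False), (0.7, True), (0.9, True)]
classifier = decompress(compress(sample))
print([classifier(x) for x, _ in sample])  # [False, False, True, True]
```

The reconstructed classifier labels every sampled point correctly even though the summary kept only one number; the paper shows that for some learning problems, whether such efficient compression exists depends on the continuum hypothesis.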

This entry was posted in Discoveries, Machine Learning.
