
Ever heard of the Netflix problem? No, I’m not referring to rising subscription costs or Netflix’s shrinking library of licensed content from other studios (though those annoy subscribers too). I’m referring to something much bigger: the “Netflix Problem” in computer science. In essence, it asks how Netflix can optimally recommend personalized content to viewers based on inferences from their past viewing preferences. Sound familiar? The same problem confronts Amazon, Spotify, and many other content providers. It is a real-world problem that calls for clever mathematics and machine learning, because it involves massive and sparse datasets (sparse because most users rate only a tiny fraction of the available films). The key to accurate recommendations is clever sampling of that massive, sparse data.
The Quantum Solution
The Netflix problem, or more generally the recommendation problem, lends itself well to fast computing methods like quantum computers. Quantum computers are fast because they are not constrained by the 0-and-1 bit operations that underlie classical computing. Instead, they replace bits with quantum bits, or qubits. A qubit can exist in a superposition, meaning it can represent 0, 1, or a combination of both simultaneously. This property, along with entanglement, allows quantum computers to process massive amounts of data in parallel, tackling very complicated problems beyond the reach of the most powerful classical computers.
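To make the superposition idea concrete, here is a minimal sketch (my own illustration, not from any quantum library) of a single qubit modeled classically as two amplitudes a and b with a² + b² = 1; measurement collapses it to 0 or 1 with probabilities a² and b²:

```python
import math
import random

def measure(a, b, trials=10_000):
    """Simulate repeated measurement of the qubit a|0> + b|1>.

    Returns the observed frequency of outcome 1, which should
    approach b^2 as the number of trials grows.
    """
    assert abs(a * a + b * b - 1.0) < 1e-9  # amplitudes must be normalized
    ones = sum(1 for _ in range(trials) if random.random() < b * b)
    return ones / trials

# An equal superposition: each amplitude is 1/sqrt(2),
# so measurement yields 0 or 1 with probability 1/2 each.
a = b = 1 / math.sqrt(2)
freq_of_one = measure(a, b)
```

Of course, simulating qubits classically like this scales exponentially in the number of qubits, which is exactly why genuine quantum hardware is interesting.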
In 2016, two quantum researchers, Iordanis Kerenidis and Anupam Prakash, published an algorithm that could sample data matrices exponentially faster than the best known classical algorithms. Think of a huge matrix where users are listed on the vertical axis and movies on the horizontal axis. Instead of filling out the whole matrix to identify the best movie to recommend, the Kerenidis-Prakash algorithm uses quantum sampling of the existing data to generate “good enough” recommendations, speeding up the process exponentially. It is an impressive piece of work, except that they didn’t prove that no classical algorithm could do as well.
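A key ingredient in this line of work is sampling rows of the ratings matrix with probability proportional to their squared norms (so heavy raters are picked more often), rather than reading the whole matrix. Here is a minimal classical sketch of that "length-squared" sampling on a made-up tiny ratings matrix (the data and function names are illustrative, not from the paper):

```python
import random

# Hypothetical tiny ratings matrix: rows = users, columns = movies, 0 = unrated.
ratings = [
    [5, 0, 3, 0],
    [0, 4, 0, 1],
    [2, 0, 0, 5],
]

def row_norms_squared(matrix):
    """Squared Euclidean norm of each row."""
    return [sum(x * x for x in row) for row in matrix]

def sample_row(matrix):
    """Pick a row index with probability proportional to its squared norm,
    the 'length-squared' distribution used in sampling-based linear algebra."""
    weights = row_norms_squared(matrix)
    return random.choices(range(len(matrix)), weights=weights, k=1)[0]

idx = sample_row(ratings)  # user 0 (norm^2 = 34) is most likely to be drawn
```

In the real algorithms this kind of sampling drives a low-rank approximation of the full matrix, which is what makes "good enough" recommendations possible without ever touching most entries.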
Enter Ewin Tang, a teenage undergraduate at the University of Texas at Austin. Tang was taking a quantum information class with the renowned computer scientist Scott Aaronson. For her senior thesis, Aaronson suggested that Tang prove that no fast classical algorithm exists for the recommendation problem. In other words, prove that quantum algorithms are uniquely superior to any classical algorithm for the recommendation problem.
Dequantizing the Quantum Advantage
Tang got to work on her assignment in the fall of 2017. For several months she struggled with her thesis but soldiered on day and night. Over time, her results began to suggest that a classical algorithm as good as the quantum solution might in fact exist. She wrote up her paper and presented her findings to a group of mathematicians and computer scientists at the Simons Institute for the Theory of Computing at the University of California, Berkeley. It was 2018. She was just 18, standing in front of some of the most brilliant minds in their fields, who had come to a seminar to hear a prodigy present a “dequantizing” algorithm. Her youth earned her no easy treatment.

Tang’s presentation spread over four hours. Sharp questions flew thick and fast, and Tang answered most of them deftly, with a confidence that belied her youth. By the end of the marathon presentation there was complete silence; her audience was convinced that Tang was right. She had proved that if classical and quantum computers are given comparable ways to access the data, the “quantum advantage” for the recommendation problem disappears. It was a stunning achievement. For science it meant that the quantum advantage, though real, may have been overhyped. Her work has sparked renewed interest in less memory-hungry classical algorithms, in the hope that they may prove useful in applications besides the recommendation problem.
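The "comparable ways to access data" at the heart of Tang's argument is often called sample-and-query access: a classical analogue of the data-structure access a quantum algorithm assumes. A minimal sketch of that access model for a vector (my own illustration of the idea, not Tang's code) looks like this:

```python
import random

class SampleQueryAccess:
    """Sketch of the 'sample-and-query' access model for a vector v.

    It supports exactly the three operations the model grants:
      1. query any entry v[i],
      2. read off the norm of v,
      3. sample an index i with probability v[i]^2 / |v|^2.
    Given this access, classical algorithms can mimic quantum sampling.
    """

    def __init__(self, v):
        self._v = list(v)
        self._sq = [x * x for x in self._v]
        self._norm_sq = sum(self._sq)

    def query(self, i):
        return self._v[i]

    def norm(self):
        return self._norm_sq ** 0.5

    def sample(self):
        return random.choices(range(len(self._v)), weights=self._sq, k=1)[0]

sq = SampleQueryAccess([3.0, 0.0, 4.0])
drawn = sq.sample()  # never index 1, since that entry is zero
```

The fairness of the comparison is the point: a quantum algorithm that assumes the data is preloaded into a quantum-friendly structure should be compared against a classical algorithm granted the analogous structure, and under that ground rule the exponential gap closed.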
Tang’s ultimate validation arrived last year, when she received the 2025 Maryam Mirzakhani New Frontiers Prize from the Breakthrough Prize Foundation for “developing classical analogs of quantum algorithms for machine learning and linear algebra, and for advances in quantum machine learning on quantum data.” Tang is now serving as a Miller Postdoctoral Fellow at the University of California, Berkeley. In a few months’ time, she will join the faculty of Princeton University’s computer science department to research problems sitting at the boundary of physics and computation. May the force be with her!