This isn't "news" as Google has had this technology for over a year without releasing anything to the public:
Google claims:
To investigate AlphaEvolve’s breadth, we applied the system to over 50 open problems in mathematical analysis, geometry, combinatorics and number theory. The system’s flexibility enabled us to set up most experiments in a matter of hours. In roughly 75% of cases, it rediscovered state-of-the-art solutions, to the best of our knowledge.
And in 20% of cases, AlphaEvolve improved the previously best known solutions, making progress on the corresponding open problems. For example, it advanced the kissing number problem. This geometric challenge has fascinated mathematicians for over 300 years and concerns the maximum number of non-overlapping spheres that touch a common unit sphere. AlphaEvolve discovered a configuration of 593 outer spheres and established a new lower bound in 11 dimensions.
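To make the quoted result concrete: a kissing configuration in d dimensions is just a set of points at distance 2 from the origin (centers of unit spheres tangent to the central unit sphere) whose pairwise distances are all at least 2, and exhibiting 593 such points in 11 dimensions is what certifies the new lower bound. Below is a minimal, illustrative Python sketch of such a check; the helper `verify_kissing_configuration` is my own naming, not anything from AlphaEvolve, and the demo uses the classical 6-circle configuration in the plane rather than the 11-dimensional one.

```python
import numpy as np

def verify_kissing_configuration(centers: np.ndarray, tol: float = 1e-9) -> bool:
    """Check that unit spheres centered at `centers` all touch the central
    unit sphere at the origin and do not overlap one another.

    For unit spheres this means: every center lies at distance exactly 2
    from the origin, and every pair of centers is at distance >= 2.
    """
    # Each outer sphere must be tangent to the central sphere: |c| == 2.
    radii = np.linalg.norm(centers, axis=1)
    if not np.allclose(radii, 2.0, atol=tol):
        return False

    # No two outer spheres may overlap: pairwise distance >= 2.
    n = len(centers)
    for i in range(n):
        dists = np.linalg.norm(centers[i + 1:] - centers[i], axis=1)
        if np.any(dists < 2.0 - tol):
            return False
    return True

# Toy example in 2 dimensions: 6 unit circles around a central unit circle,
# the classical kissing number in the plane. A configuration of 593 points
# in 11 dimensions would be checked the same way.
angles = np.linspace(0.0, 2.0 * np.pi, 6, endpoint=False)
hexagon = 2.0 * np.column_stack([np.cos(angles), np.sin(angles)])
print(verify_kissing_configuration(hexagon))  # True
```

The point of the sketch is only that the lower bound is a verifiable certificate: anyone can recheck the 593 centers with a few lines of arithmetic, even if finding them is the hard part.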
Granted, there's more hype than substance in the claim, as most of the algorithms AlphaEvolve invented were specializations of existing, human-invented algorithms to limited domain problems. Still, one can't help but wonder whether it has been critical to Google's blazing-fast progress in recent months.
I also wonder whether DeepSeek has run into issues with their latest batch of models. There were rumors that they would release the next iteration of their thinking series in May, or even earlier, but there's no such release in sight.
Of course, these are just rumors. But the fact that they released a theorem prover in lieu of a major new model suggests that they are not satisfied with the current performance of whatever general model they have trained.
Then again, releasing a theorem prover also indicates DeepSeek is investing in foundational tools (much as Google has with AlphaEvolve, AlphaFold, etc.), which seems to be the key to pushing past the wall currently facing most AI companies outside of Google and OpenAI.
Times like these also make one resent the fact that China has no equivalent to Google (i.e. a huge, global tech company built around search and foundational AI research), owing to the shameful failure that is Baidu.