Google clinches milestone gold at global math competition
OpenAI's and Google's AI models achieved impressive results in a difficult math competition, but each company disputed how the other obtained its score.
Researchers are urging developers to prioritize research into “chain-of-thought” processes, which provide a window into how AI systems make decisions.
The most powerful artificial intelligence company in the world just admitted it needs help from one of its biggest rivals to keep up with demand.
OpenAI has included Alphabet's Google Cloud among its suppliers to meet escalating demands for computing capacity, according to an updated list published on the ChatGPT maker's website.
OpenAI has a $700 billion-plus total addressable market ahead of it, JPMorgan says as it takes the rare step of covering a private company.
In a $2.4 billion deal, Google recruited the chief executive and a co-founder of Windsurf, which OpenAI had been in talks to buy, as the battle to dominate artificial intelligence escalates.
As of Friday, Windsurf’s head of business, Jeff Wang, will take over as the startup’s interim CEO, he announced in a post on social media. Most of Windsurf’s 250-person team is not headed to Google DeepMind and will continue offering its AI coding tools to enterprise customers.
OpenAI will reportedly release its own web browser, putting it in position to challenge Google’s long dominance in web search.
OpenAI has its work cut out for it – Google Chrome, which is used by more than 3 billion people, currently holds more than two-thirds of the worldwide browser market, according to web analytics firm StatCounter. Apple’s second-place Safari lags far behind with a 16 percent share. Last month, OpenAI said it had 3 million paying business users for ChatGPT.
OpenAI’s deal to buy Windsurf is off, and Google will instead hire Windsurf CEO Varun Mohan, cofounder Douglas Chen, and some of Windsurf’s R&D employees and bring them onto the Google DeepMind team, Google and Windsurf announced Friday.
Chain-of-thought monitorability could improve generative AI safety by assessing how models come to their conclusions and spotting the “intent to misbehave.”