Publication:
Tech Monitor
Post date:
September 15, 2025
While big tech firms continue to pour billions into developing advanced models like ChatGPT and Gemini Ultra, some experts argue that the core functionality of large language models (LLMs) is already showing signs of commoditisation. At Appian’s user conference, CTO Mike Beckley noted that the leading LLMs now differ only marginally in performance, while the cost of using them is falling dramatically.

For organisations, however, the real challenge is not just which model to choose, but how to apply models safely and effectively. As Silvia Lehnis, Consulting Director for AI and Data at UBDS Digital, explains: “It’s much easier for developers but also for managers to choose something that’s well known. Not least as it’s more defensible to the board if something goes wrong.”

Lehnis also highlights that mainstream models often disappoint businesses in practice, particularly when used for specialised tasks. One solution, she says, is building industry- or company-specific models: “If you’re taking a generic model… you’re still getting a lot of the generic content coming through and paying for the cost of the compute of that generic model, which doesn’t really answer that very specialised question.”

This raises critical questions for enterprises about trust, defensibility, and the strategic use of AI, questions that go well beyond the hype of the latest chatbot.