In a previous life I started out as a software developer, then got into software design and architecture, some business analysis, some project management and then became a software executive. Throughout this career in software development it became very obvious to me that developing good software was all about (i) keeping the users/product managers/customers within the bounds of realism and budget (ii) constantly working to remove mediocre and incompetent managers, analysts and developers from the development team (iii) selecting staff who plan and think as much as code (iv) buffering the software developers so that they can maintain productive “flow”. In that order.
Productivity varies colossally between individual developers and between teams of developers, with highly productive teams able to outproduce mediocre teams ten times their size, and with higher levels of quality. It’s well known in the industry that the average software manager is mediocre at best, and that many of the leaders of large IT organizations and corporations have little real knowledge of software development. It also takes a lot of continuing focus and effort, and sometimes uncomfortable conversations with unreasonable users/product managers/customers, as well as some uncomfortable discussions with staff, to maintain good teams. And with Chief Information Officer revolving doors and executive politics, many years of such work can be destroyed quite quickly.
So much better for the upcoming IT executive to “reduce costs” through offshoring to India or some other country, where all the extra “soft” costs are not evident while all the “hard” savings very much are. Then move on before the soft-cost shit hits the fan. Or just buy lots and lots of packaged applications software, turning you into a software contract executive rather than a software development executive. Even better, buy applications running in the “cloud” so that you don’t even have to have your own computing centre. It may later turn out that the software product (and cloud) provider you contracted with is inflexible and may even be more expensive than the in-house code (and the in-house data centre), but by that time you should have moved onwards or even upwards. Some corporate software product corporations are so egregious in their blatant profiteering that they are referred to by such lovable names as “tough shit you signed”. In one case, my team developed a whole in-house database that mirrored that of the software provider just to escape the egregious data access fees. In another, we bought the source code just so we could fix problems that the vendor seemed unable to fix for years, and then fixed everything within a few months! That’s pretty much what happens within corporations that use a lot of software but whose business isn’t selling software.
For companies selling software or software-intensive services, which includes banks in the form of payments and treasury management systems, there is the need to constantly provide new services/client manipulations to drive business revenue growth. In software-only companies, there is the need to come up with new product growth stories to keep those high price-earnings ratios that boost the value of the stock options and stock grants of the executives and others deemed to be “important”, or granted in lieu of pay for crazy work hours. For a couple of decades “the internet” and “smart phones” offered huge growth opportunities, especially to those that could utilize the utter lack of anti-trust enforcement in the West to monopolize chunks of the application space and make rentier-style profits. Such as office productivity tools and desktop operating systems (Microsoft), web browser, search and online video sharing (Alphabet), social groupings and messaging (Meta), smart phones and related application stores (Alphabet and Apple), cloud computing (Amazon, Alphabet, Microsoft) and streaming (Netflix). And the latest disastrous (for the general population, not the company) September 2nd Alphabet/Google anti-trust ruling just underlines how craven the state and the judiciary are to corporate power. As Brian Merchant noted:
After the ruling, Wall Street, Google, and Apple rejoiced. Google shares skyrocketed, ultimately rising 9%, adding $230 billion in value, and reaching a historic high for the company. This was a best case scenario for Google and Big Tech, which now has a very handy precedent. Mehta declared Google a monopoly in 2024 and then decided that it could effectively continue to operate as one in 2025. As antitrust writer Matt Stoller put it, “this decision isn’t just bad, it’s virtually a statement that crime pays.”
To underline this, only two days later President Trump held a “tech dinner” with 33 top Silicon Valley players (after an earlier in the day AI Summit hosted by the obvious AI expert the First Lady!), including one of the founders of Google and the guy below.
But internet and smart phones, together with customer gouging (and the harvesting of colossal amounts of personal information for advertising uses), are meeting their limits, and a new growth story is required to maintain the myth of never-ending exponential profit growth! With the need for such stories, the leaders of technology companies have become more spinmeisters than competent, informed managers of massive software companies. And to such people, “Artificial Intelligence” (AI) is the next growth story to keep those P/E values in the sky. That AI also provides all those software executives in non-software product companies with a new shiny object to wave in front of the business executives to hide the lack of any real improvements being made. And all the business magazines that those busy executives read in their limousines and personal jets will have articles about the wonders of AI. Better for a software executive to already be doing AI before the business executives ask about it, or get prompted to ask about it by all the hungry latest-fad-pushing consultants at McKinsey etc.
In a world of rentier-level profit margins in the West, AI has to be provided in a monopolistic fashion, otherwise it will only be a story of low profit growth that may also cannibalize the high profit areas. So we get the colossus-scale AI model, run in massive data centres and protected by legions and legions of copyright lawyers. But where can a set of pockets deep enough be found to fund these massive data centres until all the highly profitable “growth” AI products proliferate?
Software development is where it’s at, with Cursor being the startup poster child; provided by Anysphere Inc., a private company founded in 2022 and currently valued at US$10 billion after a US$900 million funding round in June 2025. Since being founded, the company has raised US$2 billion, and it seems to be burning money as fast as it can raise it. The Cursor tool uses AI services from Anthropic, which was founded by former members of OpenAI in 2021 and by the end of 2022 had received US$580 million of funding. In September 2023, Amazon announced an investment of up to US$4 billion (with Amazon Web Services becoming the prime provider of data centre services to Anthropic) and a month later Google followed with US$2 billion. In November 2024, Amazon increased its investment level to US$8 billion, and in March 2025 Anthropic raised another US$3.5 billion at an inferred valuation of US$61.5 billion. Then only six months later, the company raised US$13 billion at an inferred valuation of US$183 billion. Anthropic also seems to be burning cash as fast as it can raise it, losing US$5.3 billion in 2024, and somehow magically believes those losses will be reduced in 2025; reality points in the opposite direction, with an actual cash burn rate much higher than that.
In an attempt to stem its losses, Anthropic raised the prices for its AI services, which immediately forced Anysphere to raise its Cursor prices and set usage limits. At the same time, Anthropic is increasingly providing its own software development tools, threatening to cut out the middleman Anysphere. All this while there is mounting evidence that the AI software developer tools don’t even increase developer productivity, as Mike Judge notes here. The raised prices threaten to severely retard user and usage growth, killing the growth story that the two companies rely on to raise more cash to feed their money-burning operations. The poster-child use case for generative AI is in great danger of turning into a bust, which will very publicly feed back into Anysphere, Anthropic and Amazon Web Services; undermining the AI growth story.
The other supposed poster child for the profitable AI use case is OpenAI and its GPT products. The company was founded in 2015 as a non-profit by Sam Altman and Elon Musk. OpenAI transformed into a for-profit in 2019, then released GPT-3 (on which ChatGPT is based) and GPT-4 in 2023. These are predominantly subscription AI services with which users can build business software applications, as well as answer queries, produce documents, translate text, and analyze images and data. Microsoft is using OpenAI services to support its own AI enhancements to its software applications. Unfortunately, none of these applications is generating more revenue than it costs to provide. The company lost US$5 billion in 2024 (financial loss, not the significantly higher cash burn), and is forecast to lose US$8 billion or more in 2025. Ed Zitron considers that OpenAI will burn through US$15 billion on compute costs alone in 2025, and possibly more than US$20 billion overall.
In October 2024, OpenAI raised US$6.6 billion (Microsoft, Nvidia, Softbank) at an implied valuation of US$157 billion. In April 2025, OpenAI raised another US$40 billion at an implied valuation of US$300 billion; a total of US$46.6 billion in just six months. Such a colossal cash burn rate may require a new large financing round in early 2026, and that’s even if Softbank can make good on its financial commitments to OpenAI; something that is under significant question. There is also Perplexity, which provides an AI-enhanced search engine and spent 164% of its 2024 revenue just on services provided by AWS, Anthropic and OpenAI; i.e. for every dollar of revenue it spent US$1.64 just on AI compute services.
OpenAI doesn’t tend to run its own data centres, relying on those built by the likes of Oracle, Microsoft, SoftBank and Coreweave. Oracle is spending such large amounts of money building data centres for OpenAI that it is currently cash flow negative, as it does not generate the huge cash flows of the likes of Amazon, Microsoft, Google and Meta. It is already cutting costs in other areas and transforming cash bonuses into stock grants to conserve cash. The dependence of these massive investments on a single customer creates a very major risk for Oracle, with the possibility of OpenAI running out of money being a nightmare scenario. Without the OpenAI growth story, Oracle is just a low-growth corporate software company that will have to write off a chunk of data centre investments. With Zuckerberg’s forecast that Meta will spend US$600 billion on data centres through 2028, even the cash-generating machine Meta may get a little strapped.
Coreweave was founded in 2017 as a cryptocurrency mining company by three commodities traders, but changed to providing cloud computing after the 2018 crypto crash. In April 2023, Nvidia invested US$100 million in the company (which was a significant purchaser of its chips), valuing it at US$2 billion. In May 2024, the company raised US$1.1 billion at a valuation of US$19 billion. In March 2025, OpenAI signed a US$12 billion AI infrastructure contract with Coreweave and took a US$350 million stake. Just after that, Coreweave went public with an IPO that raised US$1.5 billion; after reaching a valuation of around US$90 billion in June, it has recently fallen back to US$43 billion. In its latest financial quarter Coreweave lost US$290 million on US$1.2 billion of revenue, and in the previous quarter US$314 million on US$981 million. In October its costs will jump, as it has to start making loan payments. If OpenAI starts running out of money, Coreweave may quite rapidly start experiencing financial difficulties. And perhaps even before then.
As Ed Zitron so bluntly puts it:
Large Language Models are too expensive, to the point that anybody funding an "AI startup" is effectively sending that money to Anthropic or OpenAI, who then immediately send that money to Amazon, Google or Microsoft, who are yet to show that they make any profit on selling it.
Please don't waste your breath saying "costs will come down." They haven't been, and they're not going to.
…
As I'll discuss, I don't believe it's possible for these companies to make a profit even with usage-based pricing, because the outcomes that are required to make things like coding LLMs useful require a lot more compute than is feasible for an individual or business to pay for.
In parallel, there have been reports of the vast majority of AI business projects failing to produce beneficial outcomes, directly undercutting the whole “AI will drive a new level of efficiency” growth story. As Gary Marcus notes, Census Bureau data points to a decline in AI adoption rates among larger US firms. On top of that, the massive investments of Alphabet, Meta, Microsoft, Amazon and Apple have not produced any profitable use cases; rather, they have annoyed users and exposed shortcomings such as the readiness of AI tools to lie and hallucinate. It is the massive investments of these companies and the startups that are driving the sales growth of Nvidia. It is the growth stories and oversized p/e multiples of Alphabet (p/e 25), Meta (p/e 27), Microsoft (p/e 37), Amazon (p/e 36), Apple (p/e 36), and Nvidia (p/e 49) that are responsible for the rise in the US broader market; with little or no growth outside of them. These companies are now so massive that they have to bet bigger and bigger on the next growth story to maintain their stock market valuations.
Sales growth is already decelerating at Alphabet (13.8% Q2 vs Q2 2024), Meta (21.6%), Microsoft (18%), Amazon (13.33%), Apple (9.63%), and Nvidia (55.6%, but only 6% Q2 vs Q1); and that is after extremely large increases in margins in the past few years. With US Treasury 10-year interest rates above 4%, there is also much less support for high p/e ratios than when that rate was at 2% and below. With the renaissance of Huawei in the smart phone market, Apple and Android (Alphabet) can probably count China, and possibly parts of Asia (25% of Apple revenue), as lost, but the lack of availability of Google services (US sanctions) very much limits Huawei in Europe. In contrast, the Chinese Honor (3%), which runs on Android, and Xiaomi (19%), which uses the Android kernel, have a combined 22% of the declining European market. Nvidia may also have lost the Chinese market, as US sanctions, Chinese technological advances, and Chinese Party-state industrial development and security concerns push a Chinese move toward domestic suppliers.
The growth stories of these “big 6” stocks are already under a significant amount of pressure, and any missteps that undermine the AI growth story will cause investors to question the future growth rates of these companies; with a possibly severe reduction in the p/e ratios and therefore valuations. In the real economy, all of the growth in industrial construction has been due to the frenetic pace of new AI data centre construction. If it turns out that there is no profit to be earned from those data centres, then construction could rapidly cease, taking away both that source of US GDP growth and a chunk of Nvidia sales; which may deliver a double hit of a falling p/e ratio and falling earnings to Nvidia. Whether just OpenAI and Anthropic can keep funding colossal loss-making investments in AI services (with compute resources provided by Amazon, Oracle, Microsoft and Coreweave etc.), with no prospect of actually making a profit, is a very major risk factor in Nvidia’s future revenue projections.
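The compounding of that “double hit” can be sketched with a few lines of arithmetic. The numbers below are purely hypothetical (the starting multiple of 49 echoes the Nvidia p/e cited above; the derated multiple of 30 and the 20% earnings fall are illustrative assumptions, not forecasts); the point is that because market capitalization is the product of the p/e multiple and earnings, simultaneous falls in both multiply rather than add:

```python
# Illustrative only: how a falling p/e multiple and falling earnings compound.
# Market cap = p/e ratio x earnings, so simultaneous declines multiply.

def market_cap(pe_ratio: float, earnings: float) -> float:
    """Market capitalization implied by a p/e multiple and annual earnings."""
    return pe_ratio * earnings

# Hypothetical starting point: a p/e of 49 on 100 units of earnings.
before = market_cap(pe_ratio=49, earnings=100)

# Hypothetical bust: the multiple derates to 30 while earnings fall 20%.
after = market_cap(pe_ratio=30, earnings=80)

drop = 1 - after / before
print(f"Valuation falls by {drop:.0%}")  # roughly a 51% decline
```

Neither input fell by half on its own, yet the combined valuation roughly halves; that multiplicative effect is why growth-story stocks can deflate so quickly.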
With the probable impact on the US stock market being large, and the richest 10% of US families responsible for 50% of consumption, an AI bust may be the trigger for interest rate cuts and QE-driven US government debt yield-curve management in an attempt to keep the bubble going. Especially given the level of US government deficits (7% of GDP) and debt (above 100% of GDP), with a recession serving to greatly expand the budget deficit.
In contrast to the US, the Chinese Party-state very much implements its anti-trust and other policies to remove monopoly and oligopoly profiteering. China also has very strict laws against corporations gathering data on individuals, which greatly restricts the personally directed advertising that is such a source of revenue for the likes of Alphabet, Amazon and Meta. The result is that Chinese companies’ profit margins are about half or less of those in the US. This from Fortune magazine:
If measured by average profit per company, the profitability gap between U.S. and Chinese firms on this year’s list is the widest since 1999 when the Global 500 included only eight companies from Greater China. Chinese companies on the list score even lower by “productivity,” which divides profits by the number of people they employ. Chinese companies earned $30,800 per employee last year, only a fifth of the $154,400 per employee earned by U.S. firms.
US companies exploited the massive QE and inflation of the post-COVID years to significantly expand their profit margins, something that the Western media will not cover. The definition of “productivity” above says so much about the West, as it is profits per employee, not output per employee hour. When such things as cars in China are half the price or less of what they are in the US, and making money off financial rentiership and harvesting individuals’ personal information is severely restricted, of course profits will be lower. Then add in the utterly corrupt and profiteering US sick-care and war-making industries! Fortune magazine feigns ignorance of the reasons for the differences in profit margins (“The reasons for this year’s wider profit gap between the China and the U.S. aren’t obvious”), when the reality is that it would just be very inconvenient for the magazine to mention the obvious reasons.
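The distortion in the profits-per-employee “productivity” measure is easy to show with arithmetic. The figures below are invented purely for illustration (none come from the Fortune data): two firms with identical physical output per head look very different on this metric once prices, and hence margins, differ.

```python
# Illustrative only: "productivity" measured as profit per employee rewards
# high prices and fat margins, not physical output per head.

def profit_per_employee(units: int, price: float,
                        unit_cost: float, employees: int) -> float:
    """Total profit divided by headcount, the metric Fortune uses."""
    return units * (price - unit_cost) / employees

# Hypothetical high-margin firm: 1,000 cars at $40,000 each, $30,000 unit
# cost, 100 employees.
high_margin = profit_per_employee(units=1000, price=40_000,
                                  unit_cost=30_000, employees=100)

# Hypothetical competitive-market firm: identical output and headcount, but
# the cars sell at half the price, so the margin per unit is far thinner.
competitive = profit_per_employee(units=1000, price=20_000,
                                  unit_cost=17_000, employees=100)

print(high_margin)  # 100000.0 profit per employee
print(competitive)  # 30000.0 profit per employee, same cars built per head
```

Both firms build exactly ten cars per employee, yet the high-price firm scores more than three times higher on the “productivity” metric; measured as output per employee hour, the two would be identical.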
In such an environment, building a corporate monopolized version of AI would not be acceptable in China. Aided also by the US sanctions on high-end chip sales to China, the country is following a different approach of models that need less processing power and that will be provided via open source or at very low costs. Such an approach allows for a rapid dissemination of AI, driving the development of AI-derived tools and commercial applications that can be profitable. For example, within car ADAS and factory robots as well as a proliferating number of consumer products and services. This low-cost-driven rapid dissemination allows for a much faster and greater level of development and testing of possible uses, facilitating the identification of profitable applications; providing an additional productivity and innovation advantage. This is what is currently happening across China. At the same time, the AI model makers and the computer hardware manufacturers are working together to overcome the Western lead in AI hardware; which is less of an issue with less compute-intensive models and really cheap and abundant electricity.
We should not be surprised to see increasing restrictions placed on the use of Chinese AI models, such as DeepSeek, in the West in an attempt to shore up the US AI growth story. As we saw with Huawei with respect to smart phones, the US state will very readily collude with the US big tech companies to protect US profits under the cover of “national security”. Such things may support the US AI bubble for a little while longer, but as with EVs, solar panels, wind turbines, nuclear power stations etc., it will just lead to the US and the West falling behind. And those IT executives, having followed the industry trend, will be able to hold up their hands and say “who would have known”, or will have already moved on. So much less risky than actually working to increase software productivity in a long-term sustainable fashion.
I realized by 1982 that what talented people could do with system development exceeded methodologies. AI is mechanized methodologies.
I started work at AT&T Bell Labs in 1978, when AT&T had a million employees and was the largest company in the world, and Bell Labs was the best Research & Development lab in the world. Sometime in the '80s I added a subheading about Bell Labs: "for a regulated monopoly."
I came from academia and convinced our hardware/software development project to hire David Parnas as a consultant. He was one of the best software engineering researchers in the world, and his work underlies what today is called object-oriented programming. We implemented his document-driven method and wrote a paper published in the Bell Labs technical journal titled "Using Documentation As A Software Design Medium". Our initial project produced sample documents, and later they gave me the job of doing a technology transfer of the method throughout the company. The implementations went very well. In 1982 I was offered the job of manager of software engineering at the National Bureau of Standards, NBS, now called NIST.
I transferred the method to 40 projects, which ranged from small to huge, from handheld devices to reimplementations of large systems and hardware development. Key people who championed the method read the article above, looked at sample documents from other projects, and were off and running. The clients on the projects were MS and Ph.D. developers. There were two main groups of early adopters: old timers, who noted that the method was just systematic reading, writing, and dialogue, and Ph.D.'s in physics, who realized the need for an underlying model, which was provided by the document framework.
I was a true believer in the power of design and human practice. Like the amazing projects described by Roger Boyd in this article, there were individuals and teams who did incredible work, who in my case used the design-by-documentation method, leading to sets of documents using the same names and often the same symbolic abbreviations. In hindsight, I was naïve and thought that all I had to do was interview these brilliant people to capture the essence of their work. So I began extensive interviews, listening as hard as possible and recording their experiences. After a while I realized that their stories did not converge. These people were doing something beyond the methodology and recording what they did using the document framework. I thought that through interviews, with the starting point of a common document framework, I could tap into collective experience. The stories didn't converge. I had been given the opportunity to understand the creativity of design, but in fact I realized that the people were doing something far more important than the methods.
For years, I took it as a personal failure that I had been at the forefront of software engineering, working with talented people, but was unable to capture the essence of what made the difference: people, project, new hardware, new challenge, looking at old problems in a unique way, etc. After a few years I realized that there are 26 letters in English, and a few other symbols, but that does not explain what a novelist does to create a story. I can now accept why my attempt did not succeed and how I was blinded by the excitement of project design and implementation.
I turned down the job offer at NBS because I knew that I did not understand software.
So, here we are in 2025, and this article by Roger Boyd recounts his experience in software and predicts that the AI fad will not deliver huge returns because it is extracting resources from past successes with little to no additional benefit. I agree.
Back in the stone age, AT&T had labs to test both their equipment and vendor equipment software updates and generic upgrades. This enabled the company to reach its five-nines (99.999%) product availability goals. The lab I worked in tested signaling (as opposed to voice) data. We never had issues once the work was in the network.
We were appalled when we got a new boss who proclaimed how much money could be saved by "offshoring the lab" to Texas. "Why test it twice?", despite being told how often we caught issues because our testing was far more thorough. Even though he looked like Milhouse from The Simpsons, the idea of cutting spending impressed the higher-ups more than providing first-rate service.