Introduction
Originally, I planned to write and publish the third part of my trilogy on generative AI only after presenting the second part at a scientific conference in November 2025. However, the recent escalation of economic commentary on AI has prompted me to shift course. Building directly on part one (which explores how generative AIs describe themselves; the latest version is available here), part three below focuses on the business dimensions of AI. A link to part two, on the impact of GAIs on humans, which is awaiting presentation at the Understanding and Addressing Digital Inequalities conference in Florence, will follow soon.
The current AI imbalance
The major turning point for mass AI adoption is not a change in AI technology but user alignment[1], that is, a change in how AI systems interact with users. In the words of OpenAI’s CEO Sam Altman: “I think one of the surprising things is, if you do a little bit of fine tuning to get [the model] to be helpful in a particular way, and figure out the right interaction paradigm, then you can get this. It's not actually fundamentally new technology that made this have a moment. It was these other things.”[2]
Funding needed
The turn to mass adoption followed the realization that “the most advanced AI would continuously use more and more compute and that scaling large language models was a promising path to AGI [artificial general intelligence – OHS] rooted in an understanding of humanity. We would need far more compute, and therefore far more capital, than we could obtain with donations in order to pursue our mission.”[3] And thus, OpenAI changed from an organization relying on ideas while having “no products, no business, and no commercial revenue”[4] to a partial for-profit organization aiming to become an “enduring company”[5].
State of tech
The turn to mass adoption does not imply that the technology has reached the level required for sustainable or truly useful mass use. As Altman admits: “I don't think we're super close to an AGI.”[6] The current architecture underlying ChatGPT and other generative AIs (GAIs) remains inherently flawed: it is indifferent to factuality and truthfulness and lacks cognitive depth beyond its training data (Hansen-Staszyński, 2025). Moreover, there is no realistic pathway to overcoming these flaws (Coveney & Succi, 2025; Zhao et al., 2025). These limitations make current GAIs best suited for niche users who maintain a critical distance — adopting a zero-trust approach to AI outputs and recognizing that GAIs are inanimate tools without agency. For other users, GAI offers short-term gains — a superficially polished output produced with little effort — that often prove unfit for long-term use (e.g., “workslop”; Niederhoffer et al., 2025) and may even be harmful (Fang et al., 2025; Liu et al., 2024; Hansen-Staszyński, 2025).
Notwithstanding these flaws, AI technology is bundled with a wide range of software and hardware products without a user opt-out, to the extent that the practice has been dubbed 'force-feeding'[7].
The imbalance
The imbalance between the technological state of AI and the adoption rate made possible by alignment and imposed bundling casts a long shadow over the future of the entire AI project. As long as the original promise of creating an AGI — “AI systems that are generally smarter than humans” in the words of OpenAI[8] — remains unfulfilled, the current situation resembles a bubble. And bubbles can burst.
Regarding AGI, Altman acknowledged at the beginning of 2025: “There is still so much to understand, still so much we don’t know, and it’s still so early. But we know a lot more than we did when we started.”[9] Relatedly, Altman now brushes AGI aside as a relatively trivial goal and has replaced it with “superintelligence in the true sense of the word”[10]. Nevertheless, the new concept sounds very much like AGI.[11]
The bubble
Lately, a growing number of analysts have questioned whether GAIs constitute a bubble. The Bank of England[12], the IMF[13], Deutsche Bank[14], JPMorgan[15], and others[16] are warning that the current state of affairs cannot last. The implications of a potential burst would be severe for the U.S. economy in particular and the global economy more broadly. According to JPMorgan’s Michael Cembalest, “AI related stocks have accounted for 75% of S&P 500 returns, 80% of earnings growth and 90% of capital spending growth since ChatGPT launched in November 2022.”[17] George Saravelos, Global Head of FX Research at Deutsche Bank, notes: “It may not be an exaggeration to write that NVIDIA—the key supplier of capital goods for the AI investment cycle—is currently carrying the weight of U.S. economic growth.”[18]
The escape
The most feasible path to prevent the current situation from turning sour is to secure continued exponential investment in AI. Currently, however, investments in AI are becoming increasingly circular. Morgan Stanley analyst Todd Castagno describes the process: “suppliers are funding customers and sharing revenue; there is cross-ownership and rising concentration”[19]. Investors are voicing concern about this circularity, and that concern alone might burst the bubble.
Exponential investment by independent parties would probably avert this worst-case scenario. But such investments are only justified if AI can deliver explosive returns for the customers adopting it. This requires more than alignment-driven mass adoption: users beyond the few sectors currently adopting it (e.g., coding and drug discovery) must discover genuinely transformative applications of existing AI technologies in their work and everyday lives — applications that replace the present, often ineffective, counterproductive, and sometimes risky tendency to use AI as a means of compensating for skill gaps or unmet personal needs.
But even if customer returns somehow materialize, the AI sector is not out of the woods. Bain’s recent sixth annual Global Technology Report argues that if AI adoption and compute demand grow as projected to 2030, the world (or at least major companies) will need to generate enormous additional revenue (or cost savings) just to pay for the infrastructure (data centers, power, hardware) supporting that growth. The report concludes that by 2030, “even with AI-related savings, the world is still $800 billion short to keep pace with demand”[20].
Not everyone agrees with this bleak picture. But even if the AI sector finds an economic way out of its seemingly untenable position, it seems unlikely that it will voluntarily soften the lure of its current alignment or protect its users from the harms resulting from their mere use of GAIs. One way or another, mass adoption remains key to its continued existence.
Literature
· Chu, M. et al. (2025). Illusions of Intimacy: Emotional Attachment and Emerging Psychological Risks in Human-AI Relationships. ArXiv. https://arxiv.org/pdf/2505.11649
· Coveney, P. & Succi, S. (2025). The wall confronting large language models. ArXiv. https://arxiv.org/pdf/2507.19703v2
· Fang, C. et al. (2025). How AI and human behaviors shape psychological effects of chatbot use: a longitudinal randomized controlled study. ArXiv. https://arxiv.org/pdf/2503.17473
· Hansen-Staszyński, O. (2025). Generative AI-triggered digital inequality: Pathways, mechanisms, and proposed interventions. Understanding and addressing digital inequalities conference.
· Liu, A. et al. (2024). Chatbot Companionship: A Mixed-Methods Study of Companion Chatbot Usage Patterns and Their Relationship to Loneliness in Active Users. ArXiv. https://doi.org/10.48550/arXiv.2410.21596
· Niederhoffer et al. (2025). AI-Generated “Workslop” Is Destroying Productivity. Harvard Business Review. https://hbr.org/2025/09/ai-generated-workslop-is-destroying-productivity
· Zhao, C. et al. (2025). Is Chain-of-Thought Reasoning of LLMs a Mirage? A Data Distribution Lens. ArXiv. https://arxiv.org/abs/2508.01191
Footnotes
[1] “GAIs employ alignment mechanisms ‘to generate responses that are emotionally attuned and feel strikingly real’ (Chu et al., 2025). By mirroring users’ emotions, these mechanisms allow GAIs to fine-tune their interactions to maximize agreeableness and empathy. In doing so, they foster bonds that resemble human-to-human connections, as their responses replicate core processes of social bonding.” (Hansen-Staszyński, 2025)
[2] Konrad, A. & Cai, K. (2023). Exclusive Interview: OpenAI’s Sam Altman Talks ChatGPT And How Artificial General Intelligence Can ‘Break Capitalism’. Forbes. https://www.forbes.com/sites/alexkonrad/2023/02/03/exclusive-openai-sam-altman-chatgpt-agi-google-search/?sh=6088ccbf6a63 Accessed: 8.10.2025
[3] OpenAI (2024). Why OpenAI’s structure must evolve to advance our mission. https://openai.com/index/why-our-structure-must-evolve-to-advance-our-mission/ Accessed: 8.10.2025
[4] Ibid.
[5] Ibid.
[6] Konrad & Cai (2023).
[7] Gioia (2025). The Force-Feeding of AI on an Unwilling Public. The honest broker. https://www.honest-broker.com/p/the-force-feeding-of-ai-on-an-unwilling Accessed: 9.10.2025
[8] OpenAI (2023). Planning for AGI and beyond. https://openai.com/index/planning-for-agi-and-beyond/ Accessed: 8.10.2025
[9] Altman (2025). Reflections. https://blog.samaltman.com/reflections Accessed: 8.10.2025
[10] Ibid.
[11] Weatherbed (2025). OpenAI’s Sam Altman says ‘we know how to build AGI’. The Verge. https://www.theverge.com/2025/1/6/24337106/sam-altman-says-openai-knows-how-to-build-agi-blog-post Accessed: 8.10.2025
[12] MarketMinute (2025). AI Bubble Warning: Bank of England Flags 'Stretched Valuations' as Market Concentration Hits 50-Year High. Financial Content. https://markets.financialcontent.com/stocks/article/marketminute-2025-10-8-ai-bubble-warning-bank-of-england-flags-stretched-valuations-as-market-concentration-hits-50-year-high Accessed: 9.10.2025
[13] Stewart (2025). IMF chief warns ‘uncertainty is the new normal’ in global economy. The Guardian. https://www.theguardian.com/business/2025/oct/08/imf-chief-warns-uncertainty-is-the-new-normal-in-global-economy Accessed: 9.10.2025
[14] Edwards (2025). The AI boom is unsustainable unless tech spending goes ‘parabolic,’ Deutsche Bank warns: ‘This is highly unlikely’. Fortune. https://fortune.com/2025/09/23/ai-boom-unsustainable-tech-spending-parabolic-deutsche-bank/ Accessed: 9.10.2025
[15] Big Boss Interview (2025). #3 JPMorgan CEO Jamie Dimon: The AI Bubble Will Burst. BBC. https://www.bbc.com/audio/play/p0m7h23s Accessed: 10.10.2025
[16] Agarwal (2025). Is the AI boom becoming a bubble? Why Goldman, JPMorgan, IMF are sounding the alarm. The Economic Times. https://economictimes.indiatimes.com/markets/stocks/news/is-the-becoming-a-bubble-why-goldman-jpmorgan-imf-are-sounding-the-alarm/articleshow/124443190.cms Accessed: 10.10.2025
[17] Cembalest, M. (2025). The Blob: the AI and data center takeover. Eye on the Market. https://am.jpmorgan.com/content/dam/jpm-am-aem/global/en/insights/eye-on-the-market/the-blob-amv.pdf Accessed: 9.10.2025
[18] Quoted in Edwards (2025).
[19] Seitz (2025). Morgan Stanley Raises Caution Flag On AI Financing Deals. Investor’s Business Daily. https://www.investors.com/news/technology/ai-stocks-morgan-stanley-concerns-about-ai-financing-deals/ Accessed: 9.10.2025
[20] Bain & Company (2025). $2 trillion in new revenue needed to fund AI’s scaling trend - Bain & Company’s 6th annual Global Technology Report. Press release. https://www.bain.com/about/media-center/press-releases/20252/%242-trillion-in-new-revenue-needed-to-fund-ais-scaling-trend---bain--companys-6th-annual-global-technology-report/ Accessed: 10.11.2025

