Weekly 3x3: Labour market jitters. Altman on spending commits. AI benchmarks under scrutiny.
My top reads in markets, tech, and AI research this week.
Market Moves
Global equities pull back as tech and AI valuations wobble | Nov 7 2025
Major indices across the U.S. and Europe slipped this week amid rising concern about stretched tech valuations and softness in the labour market. The Nasdaq Composite logged its worst week since April. CNBC
Bank of England holds rate at 4% and signals caution as inflation peaks | Nov 6 2025
The BoE left rates unchanged, stressing that inflation has likely peaked but emphasising the need for further data before cutting. The narrow 5-4 decision underlines internal splits over policy. FT
Labour market jitters: U.S. layoffs surge, official data thin | Nov 2025
With the U.S. government shutdown impacting official economic releases, alternate data show layoffs hitting multi-year highs and job creation faltering. Markets are interpreting this as a possible early warning for growth risk. Bloomberg
Tech Talk
Sam Altman shuts down question on OpenAI spending commitments | Nov 3, 2025
On the BG2 Pod hosted by Brad Gerstner, Sam Altman — joined by Microsoft CEO Satya Nadella — faced pointed questions about OpenAI’s business model and spending commitments. Gerstner’s key question: “How can a company with ~$13 billion in revenue take on ~$1.4 trillion in spending commitments?” Altman responded sharply: he claimed OpenAI’s revenue is “well more” than the $13 billion figure, and when challenged on the spending-to-revenue ratio he told Gerstner: “If you want to sell your shares, I’ll find you a buyer.” He also said the company expects steep revenue growth, citing levers including ChatGPT, consumer devices, and automation of science. Business Insider
Microsoft's AI build-out in the spotlight | Oct 1, 2025
In a recent podcast interview, Scott Guthrie — Microsoft’s EVP for Cloud & AI — discussed the company’s massive AI infrastructure investments and the risks they entail. He acknowledged that although the organisation is pouring “hundreds of billions” into data centres and model training, the timeline for profit generation remains fluid. He also addressed investor concerns: “We’re building capacity ahead of demand,” he said, referencing the risk of “over-provisioning” in a cycle where AI monetisation isn’t yet fully visible. Podcast
Apple taps Google's Gemini to power Siri overhaul | Nov 2025
Apple is reportedly preparing to pay about US$1 billion a year to license Google’s 1.2‑trillion‑parameter Gemini model in order to give its long‑delayed Siri voice assistant an AI boost. The deal signals a major strategic shift: rather than go it alone, Apple is outsourcing a core AI capability to a rival while it continues building its own models. The market sees this as both a vote of confidence in Google’s infrastructure and a tacit admission that Apple’s in‑house AI is still playing catch‑up. Analysts are watching whether this temporary fix becomes a long‑term crutch. The Verge
Research Radar
Enterprise AI benchmarks under scrutiny — risk of misleading results | Nov 4 2025
A new academic review analysed 445 LLM benchmarks and found that most suffer from weak “construct validity” — i.e., they don’t reliably measure what they claim to. Only 16 percent of the 445 benchmarks used uncertainty estimates or statistical tests to compare model results. That undermines enterprise decisions that rely on benchmark scores to guide model purchases and deployments. AI News
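The missing-statistics complaint is easy to make concrete: a headline accuracy gap between two models may vanish inside a confidence interval. A minimal sketch of the kind of check the review calls for, using a bootstrap over hypothetical per-question pass/fail results (the scores and sizes here are invented for illustration, not from the paper):

```python
import random

def bootstrap_diff_ci(scores_a, scores_b, n_boot=10_000, seed=0):
    """95% bootstrap CI for the difference in mean per-item scores."""
    rng = random.Random(seed)
    n = len(scores_a)
    diffs = []
    for _ in range(n_boot):
        # Resample benchmark items with replacement (paired for both models).
        idx = [rng.randrange(n) for _ in range(n)]
        mean_a = sum(scores_a[i] for i in idx) / n
        mean_b = sum(scores_b[i] for i in idx) / n
        diffs.append(mean_a - mean_b)
    diffs.sort()
    return diffs[int(0.025 * n_boot)], diffs[int(0.975 * n_boot)]

# Hypothetical pass/fail results for two models on a 200-item benchmark.
rng = random.Random(42)
model_a = [1 if rng.random() < 0.72 else 0 for _ in range(200)]
model_b = [1 if rng.random() < 0.70 else 0 for _ in range(200)]

lo, hi = bootstrap_diff_ci(model_a, model_b)
print(f"accuracy gap 95% CI: [{lo:.3f}, {hi:.3f}]")
# If the interval contains 0, the apparent leaderboard gap may be noise.
```

On a 200-item benchmark, a two-point accuracy gap typically falls well inside such an interval — which is the review's point about scores being over-interpreted.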
AgentFold: Long‑Horizon Web Agents with Proactive Context Management | Rui Ye et al., Oct 2025
This paper addresses the central trade-off for web-based LLM agents on long-horizon tasks: context saturation (accumulating too much log/history) versus over-summarisation (losing important details). It introduces a “folding” paradigm in which the agent actively manages its memory, performing granular condensation when needed and deep consolidation of finished sub-tasks when appropriate. The results are impressive: the AgentFold-30B model outperforms much larger open-source models (e.g., DeepSeek-V3.1-671B) and surpasses many proprietary commercial agents. arXiv
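The folding idea can be sketched as a context buffer that keeps recent steps verbatim, trims older steps to brief notes (granular condensation), and collapses a finished sub-task into a single summary line (deep consolidation). This is an illustrative sketch of the concept, not the paper's implementation — in the real system the summaries would be produced by the LLM itself:

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    action: str
    observation: str

@dataclass
class FoldingContext:
    """Keeps recent steps verbatim; folds older detail into summaries."""
    recent: list = field(default_factory=list)   # Step objects, full detail
    folded: list = field(default_factory=list)   # one-line summaries
    keep_verbatim: int = 4                       # verbatim window size

    def add(self, step: Step) -> None:
        self.recent.append(step)
        # Granular condensation: trim the oldest steps to brief notes.
        while len(self.recent) > self.keep_verbatim:
            old = self.recent.pop(0)
            self.folded.append(f"[step] {old.action} -> {old.observation[:40]}")

    def fold_subtask(self, summary: str) -> None:
        # Deep consolidation: replace all detail for a finished sub-task
        # with a single summary line.
        self.recent.clear()
        self.folded.append(f"[done] {summary}")

    def render(self) -> str:
        lines = self.folded + [f"{s.action}: {s.observation}" for s in self.recent]
        return "\n".join(lines)

ctx = FoldingContext()
for i in range(6):
    ctx.add(Step(f"click link {i}", f"page {i} loaded with long content..."))
ctx.fold_subtask("found the pricing page; next: extract the table")
print(ctx.render())
```

The prompt the agent sees stays short and stable in length, which is exactly what lets a 30B model keep working over hundreds of browsing steps without drowning in its own history.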
New moderation rules at arXiv reflect a tipping point in academic-AI publishing | Oct 31, 2025
The preprint repository arXiv has announced that it will no longer accept computer-science category “review” or “position” papers unless they have already passed peer review at a formal journal or conference. The policy shift is explicitly a response to an influx of low-effort, AI-assisted submissions that moderators describe as “little more than annotated bibliographies.” arXiv