Weekly 3x3: The Chips Act may be dead. Vance on AI opportunity. Hardware-aligned models.

My top reads in markets, tech, and AI research this week.


MARKET MOVES

Fed says stagflation can't be ruled out | February 20th 2025

The U.S. economy is facing the risk of so-called stagflation, in which the labor market softens while inflation heats up, St. Louis Federal Reserve Bank President Alberto Musalem said on Thursday. MarketWatch

The Chips Act may be dead. Another hurdle for semiconductor stocks | February 20th 2025

Moves the Trump administration is expected to make to gut a key government scientific agency could pose another challenge for semiconductor and chip-equipment companies and their stocks. MarketWatch

Tech's need for US manufacturing | February 20th 2025

Alger says tech companies need to "start the engine" again on domestic US manufacturing. Bloomberg


TECH TALK

JD Vance warns of overregulation | February 11th 2025

US Vice President JD Vance delivered a keynote speech on the final day of the Paris AI Summit, warning global leaders and tech CEOs that excessive regulation would kill the rapidly growing AI industry. YouTube

OpenAI designing chips in-house | February 10th 2025

OpenAI is designing its first in-house AI chip. The chip design is expected to be finalized in the coming months, with TSMC handling manufacturing. Reuters

Salesforce launches AI energy usage benchmark | February 10th 2025

To address AI's environmental impact, Salesforce has partnered with Hugging Face, Cohere, and Carnegie Mellon University to launch the AI Energy Score. The tool aims to standardize energy-efficiency reporting for AI models, providing a consistent way to measure and compare their footprint. AI Magazine


RESEARCH RADAR

Native sparse attention: Hardware-aligned and natively trainable sparse attention | February 16th 2025

NSA, a natively trainable sparse attention mechanism, achieves efficient long-context modeling by combining a dynamic hierarchical sparse strategy with hardware-aligned optimizations, delivering substantial speedups and performance comparable to or better than full attention. ArXiv
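For a rough intuition of the selection idea only (this toy NumPy sketch is not NSA's actual three-branch design or its hardware-aligned kernels; the function names and the block_size and top_k parameters are invented for illustration), each query scores whole blocks of keys cheaply, keeps only the top-scoring blocks, and runs ordinary attention over just those tokens:

import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def block_sparse_attention(q, K, V, block_size=4, top_k=2):
    # Toy block-sparse attention for a single query vector.
    # Coarse stage: score each block of keys by its mean key.
    seq_len, d = K.shape
    n_blocks = seq_len // block_size
    block_means = K[: n_blocks * block_size].reshape(n_blocks, block_size, d).mean(axis=1)
    coarse_scores = block_means @ q

    # Keep only the top_k highest-scoring blocks and gather their token indices.
    chosen = np.argsort(coarse_scores)[-top_k:]
    idx = np.concatenate([np.arange(b * block_size, (b + 1) * block_size) for b in chosen])

    # Fine stage: ordinary softmax attention over the selected tokens only.
    weights = softmax((K[idx] @ q) / np.sqrt(d))
    return weights @ V[idx]

# Example: one query over 16 tokens, attending to 2 of the 4 key blocks.
rng = np.random.default_rng(0)
q, K, V = rng.standard_normal(8), rng.standard_normal((16, 8)), rng.standard_normal((16, 8))
print(block_sparse_attention(q, K, V).shape)  # (8,)

The compute saving is the point: the expensive attention step scales with the number of selected tokens rather than the full context length, which is where long-context speedups come from.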

How do LLMs acquire new knowledge? A knowledge circuits perspective on continual pre-training | February 16th 2025

By studying how LLMs build internal "knowledge circuits" during continual pre-training, the researchers find that new information is easier to acquire when it connects to existing knowledge, that learning proceeds in distinct phases (first building, then refining the circuits), and that LLMs learn from the ground up, findings that can inform more effective training strategies. ArXiv

CRANE: Reasoning with constrained LLM generation | February 13th 2025

Constraining LLM outputs to be syntactically and semantically correct can hurt their reasoning ability. CRANE, a reasoning-augmented constrained decoding algorithm, addresses this by carefully expanding the output grammar, preserving reasoning capability while guaranteeing well-formed outputs and achieving significant performance gains on challenging symbolic reasoning tasks. ArXiv
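As a toy sketch of the general idea CRANE builds on, not the paper's grammar-augmentation algorithm itself (the vocabulary, the <answer> marker, and the helper names below are invented for illustration), constrained decoding can be scoped to the span that must be well-formed while the model reasons freely before it:

import numpy as np

# Toy vocabulary: letters stand in for free-form "reasoning" tokens, digits for
# the structured final answer, plus an answer marker and an end-of-sequence token.
VOCAB = list("abcdefghij") + list("0123456789") + ["<answer>", "<eos>"]
DIGIT_IDS = [VOCAB.index(c) for c in "0123456789"]
EOS_ID = VOCAB.index("<eos>")

def fake_logits(rng):
    # Stand-in for a language model's next-token logits.
    return rng.standard_normal(len(VOCAB))

def pick(logits, allowed_ids=None):
    # Greedy next-token choice; if allowed_ids is given, mask everything else out.
    if allowed_ids is not None:
        mask = np.full(len(VOCAB), -np.inf)
        mask[allowed_ids] = 0.0
        logits = logits + mask
    return int(np.argmax(logits))

def generate(reasoning_steps=6, answer_steps=4, seed=0):
    rng = np.random.default_rng(seed)
    out = []
    # Unconstrained span: the model "thinks" freely in ordinary tokens.
    for _ in range(reasoning_steps):
        out.append(VOCAB[pick(fake_logits(rng))])
    out.append("<answer>")
    # Constrained span: only digits or <eos> are allowed from here on.
    for _ in range(answer_steps):
        tok = pick(fake_logits(rng), allowed_ids=DIGIT_IDS + [EOS_ID])
        out.append(VOCAB[tok])
        if tok == EOS_ID:
            break
    return " ".join(out)

print(generate())

The design point is the same one the blurb describes: masking every step forces well-formedness before the model has finished thinking, while scoping the constraint to the answer span keeps outputs parseable without cutting the reasoning short.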
