With a near monopoly on advanced chip manufacturing and packaging, it is no wonder that during the AI boom the world's largest foundry is not just making money, but making a lot of it. But it is a wonder that TSMC is not extracting even more revenue and profits from its customers than it does.
The only explanation, even as TSMC is contemplating price hikes to cover rising electricity and other costs in 2025, is that company founder Morris Chang and chief executive officer CC Wei have grasped the secret of life: Know when you are already winning.
In the quarter ended in September, revenues at TSMC rose by 36 percent to $23.5 billion and net income rose by a stunning 51 percent to $10.06 billion, meaning that 42.8 percent of revenues fell to the bottom line. That is higher than the average of 37.8 percent of revenues between Q1 2018 and Q2 2024, but it is by no means the peak level TSMC has ever hit. The stretch between Q1 2022 and Q2 2023 - when 7 nanometer processes were mature, 5 nanometer processes were ramping, and GenAI had just Big Banged - was at about the same level. It is only the ramp of 4 nanometer, 3 nanometer, 2 nanometer, and 1.6 nanometer processes in recent years that has put a damper on profits; slumping sales of PCs and smartphones also did not help matters.
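If you want to check that bottom line figure yourself, the back-of-envelope math is simple enough to fit in a few lines of Python; the quarterly figures and the long-run average are the ones cited above:

```python
# Quick sanity check on TSMC's Q3 2024 net margin, using the figures cited above.
revenue_q3_2024 = 23.50      # $ billion
net_income_q3_2024 = 10.06   # $ billion

net_margin = net_income_q3_2024 / revenue_q3_2024
print(f"Q3 2024 net margin:        {net_margin:.1%}")   # ~42.8 percent
print(f"Q1 2018 - Q2 2024 average: {0.378:.1%}")        # the 37.8 percent cited above
```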
But with AI server sales booming, the PC market recovering, and smartphones more or less steady - and all three kinds of devices showing growth year on year and sequentially - TSMC was able to break the $10 billion mark in net earnings for the first time in its history. And perhaps most importantly given that the future of chip making is now a lot more expensive than in the past, that net income allowed TSMC to grow its cash and equivalents pile to $68.51 billion. And through prudence, it is managing to keep its capital expenses for the year at somewhere between $31 billion and $32 billion even as it is growing.
There could not be a more stark contrast with the state of Intel's foundry. As we pointed out back in April, based on statements by both TSMC and Intel and our forecasts, by 2030 TSMC could be at $180 billion in revenues and around $46 billion of that could come from AI training and inference chips. That AI part of the TSMC business will be larger than all of Intel Foundry six years hence. Unless something radical happens, like World War III, an alien invasion, or Earth gets swallowed up by a wandering black hole.
AI server processors are the key driver of TSMC's growth, obviously, and that category includes Nvidia and AMD GPUs, every AI accelerator we know about (including the Gaudi 2 and Gaudi 3 from Intel), and CPUs equipped with vector and matrix math units that are used to do AI processing. These are included in a product group that TSMC calls "HPC," which does not mean traditional HPC simulation and modeling but rather high performance computing in the broadest sense, much as AMD sometimes uses the term; the group also includes switch and adapter ASICs as well as FPGAs. The server AI processor business does not include networking, edge, or on-device AI units. Having said all of that, this is still a somewhat fuzzy definition.
Anyway, on the call with Wall Street analysts going over the numbers, Wei said that the company was now projecting that revenues from "server AI processors" would more than triple in 2024 and, when the year was done, would account for a "mid-teens percentage" of total revenue for the year. The forecast is for Q4 revenues to come in somewhere between $26.1 billion and $26.9 billion for all of TSMC, and at the midpoint of that range, full year 2024 sales work out to $89.7 billion. If 15 percent is a good guess for "mid-teens percentage," then TSMC's AI business will account for around $13.5 billion in 2024, which means 2023 was probably somewhere around $4.3 billion in sales of server AI processors.
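Here is one way to reconstruct that arithmetic; reading "mid-teens" as 15 percent and "more than triple" as roughly 3.1X are our assumptions, not TSMC figures:

```python
# Back-of-envelope reconstruction of the server AI processor estimate above.
q4_guidance_low, q4_guidance_high = 26.1, 26.9           # $ billion, TSMC guidance
q4_midpoint = (q4_guidance_low + q4_guidance_high) / 2   # $26.5 billion
print(f"Q4 2024 guidance midpoint:  ${q4_midpoint:.1f} billion")

full_year_2024 = 89.7          # $ billion: Q1-Q3 actuals plus the Q4 midpoint
ai_share = 0.15                # "mid-teens percentage" read as 15 percent (assumption)
ai_revenue_2024 = ai_share * full_year_2024
ai_revenue_2023 = ai_revenue_2024 / 3.1   # "more than triple" read as ~3.1X (assumption)

print(f"Server AI revenue, 2024:    ${ai_revenue_2024:.1f} billion")  # ~$13.5 billion
print(f"Implied 2023 baseline:      ${ai_revenue_2023:.1f} billion")  # ~$4.3 billion
```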
This is a little bit different from but consistent with the AI training and inference revenue model we built back in April, which pegged revenues at $1.52 billion for 2022 and at $4.6 billion for 2023. We need a bit more data to build a better model.
With that AI business at TSMC tripling two years in a row, the question now is can it triple again in 2025 and do it again in 2026? It is exceedingly hard to triple revenues for a long time, but admittedly, this may be the one exception.
The good news for TSMC is that it is squeezing more money out of each wafer that rolls through its foundry, and that is because each successive process node and packaging improvement costs more. Moore's Law is hard now, not easy, and it gets harder at each node and with each packaging advance.
Given all of this, it is no wonder that some companies are compelled to hang back on process nodes, as Nvidia did with the "Blackwell" GPUs that use the same 4N process from TSMC that the "Hopper" GPUs do. It may be far easier and more profitable to glue together reticle-limited dies with C2C interconnects than it is to move to ever smaller transistors and be on the front end of very costly and risky product ramps.
In the third quarter, sales of HPC chips as TSMC defines them rose by 62.5 percent to $11.99 billion, and we reckon that server AI processors for training and inference accounted for maybe $3.76 billion of that, or about 16 percent of overall revenues and about 31.4 percent of HPC category revenues. To make the annual numbers work out, TSMC is going to have a killer Q4 2024 for server AI processors within this HPC category - maybe something on the order of $5.65 billion. We shall see.
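The same kind of quick math, using the estimates we just laid out (the $3.76 billion and $5.65 billion figures are our guesses, not reported numbers), looks like this:

```python
# Checking the Q3 2024 shares cited above, and what they imply for the rest of 2024.
total_revenue_q3 = 23.50   # $ billion, reported
hpc_revenue_q3 = 11.99     # $ billion, reported HPC category
ai_revenue_q3 = 3.76       # $ billion, our estimate for server AI processors

print(f"AI share of total revenue: {ai_revenue_q3 / total_revenue_q3:.1%}")  # ~16 percent
print(f"AI share of HPC revenue:   {ai_revenue_q3 / hpc_revenue_q3:.1%}")    # ~31.4 percent

# If full year 2024 server AI revenue lands around $13.5 billion and Q4 delivers
# roughly $5.65 billion of it, the first half of the year would have contributed
# about $4 billion - consistent with the kind of ramp the tripling story implies.
ai_full_year_2024 = 13.5   # $ billion, our estimate from above
ai_q4_estimate = 5.65      # $ billion, our estimate
implied_h1 = ai_full_year_2024 - ai_q4_estimate - ai_revenue_q3
print(f"Implied H1 2024 AI revenue: ${implied_h1:.2f} billion")
```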
In terms of manufacturing process, 5 nanometer is still the volume revenue generator, at $7.52 billion and up 17.7 percent year on year. But 3 nanometer chip revenues rose by a factor of 4.53X to $4.7 billion, and it can't be long before 3N surpasses 5N. Then again, the use of 7N is still growing, and was up 44.5 percent year on year to just under $4 billion. Which just goes to show you that the economics of chippery is not just a straight line to the most advanced technology.
Other processes - some of them very old indeed - accounted for $7.29 billion, or 31 percent of total sales.
This is how you can tell that Intel was not really a merchant foundry. It never kept old gear around, and it never used it to squeeze money out of a process for those many use cases that are not on the leading node. For Intel Foundry, 14 nanometers is the oldest stuff it can etch in volume. TSMC can go all the way back to 150 nanometers and 180 nanometers, and believe it or not, such old processes accounted for 4 percent of revenues in the third quarter of this year.
Just for reference, 180 nanometer processes were used for Intel Pentium III, Motorola PowerPC, and AMD Athlon processors around the turn of the millennium.
Crazy, right?
One last thing. As we said above, it is probably more tempting to mash up reticle-limited chips on a more mature process than to push the transistor limits and the packaging limits at the same time. That would seem to imply less demand for 2 nanometer and 1.6 nanometer processes. Wei said that, oddly enough, this was not the case.
"We have many, many customers who are interested in 2 nanometer," Wei explained on the call. "And today with their activity with TSMC, we actually see more demand than we ever dreamed about it as compared with N3. So we have to prepare more capacity in N2 than in N3. And A16 is very, very attractive for the AI servers chips, and so actually the demand is also very high. So we are working very hard to prepare both 2N and A16 capacity."
We expect to see a mix of strategies, driven by economics as much as feeds and speeds.