
A Few Things: What's Next In Chip War, Marko Papic on Russia, How To Be A Great Analyst, Engines That Move Markets, Calling BS, Is Bigger Better in AI?, Jensen Huang with Nicolai Tangen, Sam Altman...
June 29, 2023
I am sharing this weekly email with you because I count you in the group of people I learn from and enjoy being around.
You can check out last week’s edition here: Britain AI Super Power?, India's Foreign Policy, What Do Indian Millennials Want?, Epstein's Money Men, Uranium Crash Course, Your Personal AI, AI Basics For Investors....
Quotes I Am Thinking About:
“Two roads diverged in a wood, and I—I took the one less traveled by, And that has made all the difference.”
- Robert Frost
“One of the lessons of history is that nothing is often a good thing to do and always a clever thing to say.”
- Will Durant
“Adversity is a mirror that reveals one’s true self.”
- Chinese Proverb
“In three words I can sum up everything I've learned about life: it goes on.”
- Robert Frost
“You may not be interested in war, but war is interested in you.”
- Leon Trotsky
“Anyone who stops learning is old, whether at twenty or eighty. Anyone who keeps learning stays young. The greatest thing in life is to keep your mind young.”
- Henry Ford
“All courses of action are risky, so prudence is not in avoiding danger (it's impossible), but calculating risk and acting decisively. Make mistakes of ambition and not mistakes of sloth. Develop the strength to do bold things, not the strength to suffer.”
- Niccolo Machiavelli
A. A Few Things Worth Checking Out:
1. We covered Chip War by Chris Miller in May.
Last week, I brought together some of my friends interested in AI and semiconductors with the amazing Chris Miller to discuss his book Chip War and what he's learnt over the last two years.
Here's what he’s learnt since the book:
A. Moore's Law might be ending. For decades we have relied on innovation to drive a doubling of transistor density every 18-24 months. That compounding has driven the price of transistors down roughly a billion-fold, and it underpins all the innovation we see today, especially AI (a quick sketch of the arithmetic follows point C below).
The last big push was extreme ultraviolet lithography. Post Moore's Law, we will need to rely on technologies such as FinFET transistors, improved packaging and design of chips, and specialised chips built for specific purposes, versus the general-purpose CPU or GPU model of the last three decades.
B. Technological superiority is critical to the US Department of Defense's Third Offset Strategy: access to better compute and communication chips is a matter of national security.
Also, since a war today would involve both sides pushing for full-spectrum dominance, there is a desire for edge computing and processing: you won't be able to rely on compute happening at a datacenter and then being relayed to the battlefield.
C. Across the semiconductor industry, he sees opportunity in a handful of areas:
a) Inference-focused chips: that market is wide open. Nvidia's edge with the H100 is in training, but inference and training are fundamentally different workloads.
b) Improved design and packaging of chip architectures, beyond just squeezing more transistors onto the silicon.
c) Outsourced assembly and testing (OSAT): after assembly, testing semiconductors is a critical task given the prevalence of chips in every modern device.
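To put that compounding in perspective, here is a quick back-of-the-envelope sketch in Python. The only inputs are the 18-24 month doubling cadence and the billion-fold figure quoted above; it simply asks how many doublings that implies and how long they take at that pace.

```python
import math

# A billion-fold improvement implies roughly 30 doublings, since 2**30 ≈ 1.07 billion.
doublings = math.log2(1e9)  # ≈ 29.9

# At one doubling every 18-24 months, that is roughly 45-60 years of compounding.
for months_per_doubling in (18, 24):
    years = doublings * months_per_doubling / 12
    print(f"{months_per_doubling} months per doubling -> ~{years:.0f} years for a billion-fold gain")
```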
We also spent a lot of time discussing how we came to depend so precariously on Taiwan for semiconductors, and hence: a) what the risks of a global conflict are, b) how China might escalate such a conflict, and c) what Chris is seeing and hearing from global corporations about how they are preparing for the worst.
2. What's next in Russia? As usual, my friend and favourite market strategist Marko Papic had smart things to say.
3. Some people are great at developing frameworks and mental models.
Alix Pasquet is one of those guys and his presentations are always amazing. I shared the one he did with Neckar (titled: Why Great Investors Build Networks and Never Stop Learning) a few months back and in April he presented at Project Punch Card 2023 discussing: How To Be a Great Analyst.
If you are an investor of any kind, this is time well spent.
My favourite bit was on the sources of leverage for an individual / investor: Content, Community, Code and Capital.
He also shared a few key bits on Twitter that are easy to consume.
4. The FT had a thoughtful article arguing that having a single retirement age is an example of a regressive policy, titled: Is a healthy retirement only for the rich?
Key bits:
As the retirement age creeps up in the UK, so does the number of people who will never reach it — something that is felt most harshly in the most deprived parts of the country. Among the most deprived 10 per cent, a quarter of people will die before reaching the planned higher retirement age of 68, while in the least deprived decile the figure is fewer than 1 in 10.
It may be easy to assume that what explains this increase in mortality rates between the wealthiest and most deprived groups is simply a larger number of people making “poor decisions” such as smoking, bad diets or drug use. But the fact that the discrepancy is consistent even in children suggests a more complex relationship. In 2022, 10 times more eight-year-olds died in the most deprived decile compared to the least deprived.
Having a single retirement age becomes, in practice, a regressive policy: disadvantaged groups get less pension overall because of consistently dying younger. But even noticing the disparities between the most and least fortunate is a start. Many countries in the world still don’t have this data. The first step towards adjusting the regressive nature of these policies is to recognise it.
B. Engines That Move Markets:
AI is the next BIG thing. The next Industrial Revolution….I might even believe that. But how do we make money from investing in this technology?
Alasdair (Sandy) Nairn is one of the founders of Edinburgh Partners. Prior to this, he was the chief investment officer of Scottish Widows Investment Partnership. He has won multiple performance awards for the management of global equity portfolios over his 37-year investment career.
In 2001, he published the first edition of Engines That Move Markets, with the updated second edition being published in 2018.
Philippe Laffont, the founder of Coatue, called it: “One of the best books ever written on investing - as well as on technology”.
Sandy's thesis is that the biggest technological innovations in the world have followed similar market and social patterns: scepticism is replaced by enthusiasm, venture capital is supplied, many companies are started and their stocks rise. But as the technology is developed and financial reality sets in, companies disappear, stocks collapse, and naive investors lose money.
This diagram of the Gartner Hype Cycle illustrates it well.
Through detailed research, Sandy captured this pattern and examined the impact of some of the greatest technology inventions of the past 200 years, including the railway boom, the telegraph and telephone, the development of the automobile industry, the discovery of crude oil, and the rise of the PC and the wireless world.
When I think about the invention and subsequent impact of past technologies on our society, they are just as consequential as what we have seen in the last decade.
What can we learn from these historical episodes, and where are the investment opportunities in technology?
These are my 5 lessons:
Tech investing is hard. It’s difficult to know the winners in advance. It’s hard to know which founders will win, which version of the technology will succeed. It’s much easier to see the impact on the world if the technology works, much harder to see who the winners will be.
There have been many cycles of new technologies, where the technology initially seemed new and groundbreaking, and then quickly became commoditised and competitive. Think about technologies like canals, radio, telephones, cars and railways. These were the high-tech ideas of their time.
There are a few “dependable” ways to make money in technology:
Have a monopoly
Have a cost advantage
Be first and become the market standard
Out operate others
Questions to ask yourself when investing:
Who will be the losers from this? Often the losers from a technology are more obvious than the winners.
Where are we in the hype cycle?
What is the moat protecting the business? A patent, a monopoly, regulation?
Can this founder / CEO sell through a downturn? Can they create the investor demand needed to sustain the capital the business will require?
Societal issues to consider with new technologies: What are the broader structural societal changes that will come from this technology, trend or company? What could it do to society?
Often the story of new technologies is deeply intertwined with individuals: for example, Edison and electricity and the lightbulb, Rockefeller and oil, Vanderbilt and railways.
A great podcast series worth checking out is “Founders” by David Senra. He has some great episodes on Edison, Gould, Einstein, which really bring to life the time, the technology and the founder.
A recent episode covers Peter Thiel's classic book, Zero to One: Notes on Startups, or How to Build the Future.
C. Calling Bullsh*t
Is it me or is there just a lot of BS out there?
Brandolini's law states:
The amount of energy needed to refute bullsh*t is an order of magnitude bigger than that needed to produce it.
We live in a world where the quality and accuracy of information are no longer important.
Satirist Jonathan Swift wrote:
Falsehood flies, and truth comes limping after it.
When you combine Brandolini’s principle with Swift’s observation, you end up in a world full of Bullsh*t.
Based on a popular course at the University of Washington, Calling Bullsh*t gives you tools to see through the obfuscations, deliberate and careless, that dominate our lives.
To begin, the authors define Bullshit as:
Involving language, figures, data and other forms of presentation intended to persuade or impress an audience by distracting, overwhelming, or intimidating them with a blatant disregard for truth, logical coherence, or what information is actually being conveyed.
Across a few chapters, they cover a few sources of Bullsh*t:
Correlation, not Causation
Numbers and Nonsense
Selection Bias
Chart Crimes
Calling Bullshit on Big Data and Machine Learning
It's a fun and easy-to-read book, and I'll leave you with the six tools for spotting bullshit:
Always question the source: what incentives and reasons do they have for saying this?
Are you or they making unfair comparisons? Are we comparing like with like?
Does this seem too good or too bad to be true?
Are you falling for confirmation bias?
Could this fact be off by an order of magnitude? (A quick Fermi estimate, sketched after this list, is usually enough to check.)
Maybe there are multiple hypotheses here, rather than just the one you are considering.
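On the order-of-magnitude point, a quick Fermi estimate is usually all it takes. Here is a minimal sketch in Python using an entirely made-up viral claim (the claim and every input are hypothetical, purely for illustration): a rough population-times-usage estimate shows the number is off by well over an order of magnitude.

```python
# Hypothetical claim, for illustration only:
# "Americans use 20 billion plastic straws every single day."
claimed_per_day = 20e9

us_population = 330e6           # ~330 million people
straws_per_person_per_day = 1   # a generous upper bound for most people

fermi_estimate = us_population * straws_per_person_per_day  # ~0.33 billion per day

ratio = claimed_per_day / fermi_estimate
print(f"The claim is ~{ratio:.0f}x the Fermi estimate")  # ~60x: off by more than an order of magnitude
```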
The book will leave you questioning a ton of things you believed.
The authors were on the Santa Fe Institute's podcast in 2020 discussing their book, which will give you a great overview.
And here's the beginning of their university lecture:
Thank you Oussama Himani for flagging the book.
D. The Technology Section:
1. Good Economist article titled: The bigger-is-better approach to AI is running out of road.
Key bits (emphasis mine):
But the most consistent result from modern AI research is that, while big is good, bigger is better. Models have therefore been growing at a blistering pace. GPT-4, released in March, is thought to have around 1trn parameters—nearly six times as many as its predecessor. Sam Altman, the firm’s boss, put its development costs at more than $100m. Similar trends exist across the industry. Epoch AI, a research firm, estimated in 2022 that the computing power necessary to train a cutting-edge model was doubling every six to ten months (see chart).
This gigantism is becoming a problem. If Epoch AI’s ten-monthly doubling figure is right, then training costs could exceed a billion dollars by 2026—assuming, that is, models do not run out of data first. An analysis published in October 2022 forecast that the stock of high-quality text for training may well be exhausted around the same time. And even once the training is complete, actually using the resulting model can be expensive as well.
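The billion-dollar projection is easy to reproduce. Here is a minimal sketch that takes the article's figures at face value (GPT-4's roughly $100m development cost in 2023, and a doubling every ten months) and assumes, as the article implicitly does, that cost grows in line with training compute.

```python
# Back-of-the-envelope projection using the figures quoted above:
# ~$100m to train a frontier model in 2023, doubling every 10 months.
start_cost_usd = 100e6
months_per_doubling = 10

for year in (2024, 2025, 2026):
    months_elapsed = (year - 2023) * 12
    cost = start_cost_usd * 2 ** (months_elapsed / months_per_doubling)
    print(f"{year}: ~${cost / 1e6:,.0f}m")

# By 2026 the cost is ~12x the 2023 figure, i.e. north of $1bn.
```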
Instead, researchers are beginning to turn their attention to making their models more efficient, rather than simply bigger.
One approach is to make trade-offs, cutting the number of parameters but training models with more data. In 2022 researchers at DeepMind, a division of Google, trained Chinchilla, an LLM with 70bn parameters, on a corpus of 1.4trn words. The model outperforms GPT-3, which has 175bn parameters trained on 300bn words. Feeding a smaller LLM more data means it takes longer to train. But the result is a smaller model that is faster and cheaper to use.
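To see why that trade-off pays off, here is a rough sketch using two standard rules of thumb from the scaling-law literature (my assumptions, not figures from the article): training a transformer costs roughly 6 × parameters × training tokens in FLOPs, and generating each token costs roughly 2 × parameters.

```python
# Rough compute comparison using common scaling-law rules of thumb:
#   training FLOPs  ≈ 6 * parameters * training tokens
#   inference FLOPs ≈ 2 * parameters per generated token
models = {
    "GPT-3":      {"params": 175e9, "train_tokens": 300e9},
    "Chinchilla": {"params": 70e9,  "train_tokens": 1.4e12},
}

for name, m in models.items():
    train_flops = 6 * m["params"] * m["train_tokens"]
    flops_per_token = 2 * m["params"]
    print(f"{name:>10}: training ≈ {train_flops:.1e} FLOPs, inference ≈ {flops_per_token:.1e} FLOPs/token")

# Training compute is of the same order of magnitude, but the 70bn-parameter
# model is ~2.5x cheaper per generated token: smaller, faster and cheaper to use.
```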
Another option is to make the maths fuzzier. Tracking fewer decimal places for each number in the model—rounding them off, in other words—can cut hardware requirements drastically. In March researchers at the Institute of Science and Technology in Austria showed that rounding could squash the amount of memory consumed by a model similar to GPT-3, allowing the model to run on one high-end GPU instead of five, and with only “negligible accuracy degradation”.
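A quick sketch of why rounding helps so much: weight memory is just parameter count times bytes per parameter. The 175bn-parameter size and the 80GB-per-GPU figure are my illustrative assumptions, and the paper's actual quantisation scheme is more sophisticated than plain rounding.

```python
import math

# Memory needed just to hold the weights of a 175bn-parameter model at
# different precisions (ignores activations and other runtime overhead).
params = 175e9
gpu_memory_gb = 80  # e.g. one high-end datacentre GPU

for label, bits in [("fp16", 16), ("int8", 8), ("4-bit", 4), ("3-bit", 3)]:
    gigabytes = params * bits / 8 / 1e9
    gpus_needed = math.ceil(gigabytes / gpu_memory_gb)
    print(f"{label:>5}: ~{gigabytes:,.0f} GB -> ~{gpus_needed} GPU(s)")

# fp16 needs ~350 GB (about five 80 GB GPUs); at 3-4 bits the weights
# shrink towards a single GPU, which is the article's point.
```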
Researchers at the University of Washington have invented a more efficient method that allowed them to create a new model, Guanaco, from LLaMA on a single GPU in a day without sacrificing much, if any, performance. Part of the trick was to use a similar rounding technique to the Austrians. But they also used a technique called “low-rank adaptation”, which involves freezing a model’s existing parameters, then adding a new, smaller set of parameters in between. The fine-tuning is done by altering only those new variables. This simplifies things enough that even relatively feeble computers such as smartphones might be up to the task. Allowing LLMs to live on a user’s device, rather than in the giant data centres they currently inhabit, could allow for both greater personalisation and more privacy.
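For a feel of what “low-rank adaptation” looks like, here is a minimal PyTorch-style sketch. It is my own simplified illustration, not the Guanaco code (which also combines this with 4-bit quantisation): the pretrained weights are frozen and only a small pair of low-rank matrices is trained.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen pretrained linear layer and add a trainable low-rank update."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                    # freeze the existing parameters
        # The new, much smaller set of parameters slotted in alongside the old ones.
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # starts as a no-op
        self.scale = alpha / rank

    def forward(self, x):
        # Original output plus the low-rank correction applied to the input.
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

# Fine-tuning then only updates A and B, a tiny fraction of the layer's weights.
layer = LoRALinear(nn.Linear(4096, 4096), rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable:,} of {total:,} parameters")  # ~65k of ~16.8m
```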
A team at Google, meanwhile, has come up with a different option for those who can get by with smaller models. This approach focuses on extracting the specific knowledge required from a big, general-purpose model into a smaller, specialised one. The big model acts as a teacher, and the smaller as a student. The researchers ask the teacher to answer questions and show how it comes to its conclusions. Both the answers and the teacher’s reasoning are used to train the student model. The team was able to train a student model with just 770m parameters, which outperformed its 540bn-parameter teacher on a specialised reasoning task.
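The teacher-student idea is classic knowledge distillation. Here is a minimal sketch of the usual loss, my own simplified illustration; the Google work goes further by also training the student on the teacher's written-out reasoning.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft targets: match the teacher's temperature-softened output distribution.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: ordinary cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy usage: 4 examples, 10 classes; the student learns from both signals.
student_logits = torch.randn(4, 10, requires_grad=True)
teacher_logits = torch.randn(4, 10)
labels = torch.randint(0, 10, (4,))
print(distillation_loss(student_logits, teacher_logits, labels))
```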
A final option is to improve the chips on which that code runs. GPUs are only accidentally good at running AI software—they were originally designed to process the fancy graphics in modern video games. In particular, says a hardware researcher at Meta, GPUs are imperfectly designed for “inference” work (ie, actually running a model once it has been trained). Some firms are therefore designing their own, more specialised hardware. Google already runs most of its AI projects on its in-house “TPU” chips. Meta, with its MTIA chips, and Amazon, with its Inferentia chips, are pursuing a similar path.
That such big performance increases can be extracted from relatively simple changes like rounding numbers or switching programming languages might seem surprising. But it reflects the breakneck speed with which LLMs have been developed. For many years they were research projects, and simply getting them to work well was more important than making them elegant. Only recently have they graduated to commercial, mass-market products. Most experts think there remains plenty of room for improvement. As Chris Manning, a computer scientist at Stanford University, put it: “There’s absolutely no reason to believe…that this is the ultimate neural architecture, and we will never find anything better.”
2. Jensen Huang of Nvidia spoke with Nicolai Tangen at the Norwegian Sovereign Wealth Fund on his great podcast In Good Company. Wide-ranging conversation between two smart dudes. Thank you Can Elbi for flagging.
Three big questions: What are the most important problems that AI can solve? How close are we to artificial general intelligence? And how can we use AI responsibly?
3. OpenAI CEO Sam Altman spoke at Bloomberg Live on the Future of AI. It’s a good and fun 20-min interview.
4. Marc Andreessen of A16Z was on with Lex Fridman discussing the Future of the Internet, Technology and AI. Lots of cool bits here; these are my favourite spots:
(36:56) - AI startups, (42:18) - Future of browsers, (1:25:54) - Why AI will save the world, (1:33:52) - Dangers of AI, (2:31:29) - AI and the economy, (2:37:37) - China, (2:41:49) - Evolution of technology, (2:51:08) - How to learn, (2:59:17) - Advice for young people
Believe it or not, that “♡ Like” button is a big deal – it serves as a signal to new visitors of this publication's value. If you enjoyed this, don't be shy.
Have a great weekend.