
A Few Things: Attia on Outlive, Built to Move, 5 Trends, Summers on US, Make Something Wonderful, News You Might Have Missed, Post GPT World, Risks of AI....
April 21, 2023
I am sharing this weekly email with you because I count you in the group of people I learn from and enjoy being around.
You can check out last week’s edition here: How To Be Successful, Prof Damodaran on Markets & Future of Education, Built To Move, Will AI Drive An Earnings Boom, News You Missed, Of Boys & Men, How To Use ChatGPT Like a Pro....
Believe it or not, that “♡ Like” button is a big deal – it serves as a proxy for this publication’s value to new visitors. If you enjoy this article, don’t be shy :)
Quotes I am Thinking About:
“We are what we repeatedly do. Excellence, then, is not an act, but a habit.”
– Aristotle
"The purpose of knowledge is action, not knowledge."
- Aristotle
“Never mistake motion for action.”
- Ernest Hemingway
“There is no past and no future; no one has ever entered those two imaginary kingdoms. There is only the present.”
- Leo Tolstoy
“The way to be safe is never to be secure.”
- Benjamin Franklin
A. A Few Things Worth Checking Out:
1. We discussed Peter Attia’s book Outlive two weeks ago. The book has changed my nutrition and exercise habits.
He was recently on Invest Like the Best: The Portfolio to Live Longer and Modern Wisdom: The Health Metrics That Matter Most For Longevity.
2. Being a finance person, I like high ROI (return on investment) activities, and this podcast with Kelly Starrett on Modern Wisdom titled: 8 Essential Strategies to Maximise Your Fitness does exactly that.
Kelly is the authority on this subject, having written the NYT best-selling books Becoming a Supple Leopard and Ready to Run. He is also the founder of The Ready State, which has revolutionised the field of movement health, mobility and performance therapy, and he consults with countless athletes and coaches across the NFL, NBA, US Olympic Team and US elite armed forces.
Here is the preamble to the podcast: Deep down we know we should take better care of ourselves. The random aches, pains and cracks many of us have become accustomed to simply shouldn't be a part of our everyday experience. Thankfully, Kelly has broken down his philosophy into simple vital signs you should focus on to move smoother, sleep better, live longer and train harder. Expect to learn how to fix your posture if you sit at a desk all day, what nutrient-dense foods you need to be eating more of, why you need to spend more time on the ground, how to burn an extra 100,000 calories per year with one change, the simplest way to hit 10k steps per day, the most important health metrics you need to be tracking, how to stay disciplined with your new fitness habits and much more...
Highly recommended.
3. Five big trends that have changed in the last few years by Noah Smith. Great charts.
College tuition is going down now
Immigration has bounced back
Fertility rates aren’t what they used to be
China’s popularity as an investment destination is falling
Inequality is falling (or at least has plateaued)
4. I keep hearing about multi-polarity and the fading global influence of the US. With that backdrop, watch this interview with Larry Summers. He sees “troubling” signs that the US is losing influence as the pace of globalisation fades.
“I think there's a growing acceptance of fragmentation, and maybe even more troubling, I think there's a growing sense that ours may not be the best fragment to be associated with.”
5. Make Something Wonderful, a curated collection of Steve Jobs’s speeches, interviews and correspondence, offers an unparalleled window into how one of the world’s most creative entrepreneurs approached his life and work.
In these pages, Steve shares his perspective on his childhood, on launching and being pushed out of Apple, on his time with Pixar and NeXT, and on his ultimate return to the company that started it all.
If you’d like a preview of the book, David Senra covered it with great enthusiasm on the Founders podcast and Neckar covered it on Investor’s Library.
6. Are you busier than you'd like to be? Do you find yourself thinking, 'things will slow down soon'?
That might be true. But chances are if you're busy, it's your own fault. And there's a pretty simple solution to the problem.
7. The always entertaining Jim Rogers shared his views on the world and markets.
B. News and Charts You Might Have Missed:
1. India’s population is on the brink of overtaking China’s, UN says. The United Nations estimates India will have a population of 1.4286 billion in the middle of 2023, ahead of 1.4257 billion in mainland China. China’s population fell for the first time in decades last year, while India’s has continued to grow.
2. There are as many millionaires in New York City as there are people living in Anaheim, California. According to a new report, NYC is home to 340k millionaires and has retained its title of the wealthiest city in the world, boasting ~50k more millionaires than second-place Tokyo.
The Bay Area has more billionaires than NYC, though, with 63 to NYC’s 58. Some smaller urban areas in the Sunbelt region are becoming more attractive to wealthy retirees and tech multimillionaires due to lower tax burdens, more space and remote work opportunities. Houston, for instance, is home to 20 billionaires.
3. OpenAI has just over a week to comply with European data protection laws following a temporary ban in Italy and a slew of investigations in other EU countries. If it fails, it could face hefty fines, be forced to delete data, or even be banned.
The issue is the large appetite for training data that the GPT models command. GPT-2 was trained on 40 gigabytes of text and GPT-3 on 570 gigabytes, while GPT-4 is trained on an as-yet-unknown but presumably large amount of data that is now under scrutiny in the European Union, which has laws protecting the data of its citizens. Italy gave OpenAI until April 30 to comply: it must either ask people for consent to have their data scraped or prove a legitimate interest in collecting it.
C. The Technology Section:
1. Great forward-looking article on the 6 Phases of the Post-GPT World. This one stayed with me and will inform how I see the world and make decisions.
Here is the punchline:
Businesses and people being articulated by custom AI models that you can interact with via natural language and APIs.
AI Assistants that deeply know us through our individual models, and spend every moment of every day looking for ways to shape the world to our preferences.
Somewhere between tens and hundreds of millions of people who cannot do any knowledge work better than an AI.
Somewhere between millions and hundreds of millions of people who can now create code, prose, stories, films, and other types of art that compete at the highest levels in a global marketplace.
2. A thoughtful FT article by Ian Hogarth. We must slow down the race to God-like AI: I’ve invested in more than 50 artificial intelligence start-ups. What I’ve seen worries me.
Key bits, further emphasis mine:
A three-letter acronym doesn’t capture the enormity of what AGI would represent, so I will refer to it as what it is: God-like AI. A superintelligent computer that learns and develops autonomously, that understands its environment without the need for supervision and that can transform the world around it. To be clear, we are not here yet. But the nature of the technology means it is exceptionally difficult to predict exactly when we will get there. God-like AI could be a force beyond our control or understanding, and one that could usher in the obsolescence or destruction of the human race.
Why are these organisations racing to create God-like AI, if there are potentially catastrophic risks? Based on conversations I’ve had with many industry leaders and their public statements, there seem to be three key motives. They genuinely believe success would be hugely positive for humanity. They have persuaded themselves that if their organisation is the one in control of God-like AI, the result will be better for all. And, finally, posterity.
Those of us who are concerned see two paths to disaster. One harms specific groups of people and is already doing so. The other could rapidly affect all life on Earth.
OpenAI, DeepMind and others try to mitigate existential risk via an area of research known as AI alignment. Legg, for instance, now leads DeepMind’s AI-alignment team, which is responsible for ensuring that God-like systems have goals that “align” with human values. An example of the work such teams do was on display with the most recent version of GPT-4. Alignment researchers helped train OpenAI’s model to avoid answering potentially harmful questions. When asked how to self-harm or for advice on getting bigoted language past Twitter’s filters, the bot declined to answer.
Alignment, however, is essentially an unsolved research problem. We don’t yet understand how human brains work, so the challenge of understanding how emergent AI “brains” work will be monumental. When writing traditional software, we have an explicit understanding of how and why the inputs relate to outputs. These large AI systems are quite different. We don’t really program them — we grow them. And as they grow, their capabilities jump sharply. You add 10 times more compute or data, and suddenly the system behaves very differently. In a recent example, as OpenAI scaled up from GPT-3.5 to GPT-4, the system’s capabilities went from the bottom 10 per cent of results on the bar exam to the top 10 per cent.
What is more concerning is that the number of people working on AI alignment research is vanishingly small. For the 2021 State of AI report, our research found that fewer than 100 researchers were employed in this area across the core AGI labs. As a percentage of headcount, the allocation of resources was low: DeepMind had just 2 per cent of its total headcount allocated to AI alignment; OpenAI had about 7 per cent. The majority of resources were going towards making AI more capable, not safer. I think about the current state of AI capability vs AI alignment a bit like this: [chart in the original article]
We are not powerless to slow down this race. If you work in government, hold hearings and ask AI leaders, under oath, about their timelines for developing God-like AGI. Ask for a complete record of the security issues they have discovered when testing current models. Ask for evidence that they understand how these systems work and their confidence in achieving alignment. Invite independent experts to the hearings to cross-examine these labs.
3. Azeem Azhar at Exponential View had a good post titled Exponential LLMs and the Copernican moment, wondering how LLMs will change our world view.
Key bits:
Ultimately institutional frames, whether they are regulations or laws, are trade-offs, right? They are trade-offs between freedom and benefits that might be delivered and delivered unevenly, and costs that are attached to them.
We have to, of course, take things seriously. But we should also explore the extent to which we need to change our institutional frames in the face of changes to our sociotechnical systems.
Laws that exist and laws that don’t exist (“not-laws”) are, in many cases, trade-offs. Laws that regulate economic activities are about articulating trade-offs first and foremost. They determine who gets the benefit and whose behaviour is constrained. Patent laws prevent me from selling cheaply replicated semaglutide, depriving me of freedom of action. Those same laws keep semaglutide expensive and hard to access. Why? Because the argument is that if we trade off today’s freedoms for tomorrow’s innovations, we’ll get more breakthroughs in the future. Maybe, maybe not. But that is the heart of the argument.
4. Super post thinking about the future of software titled: The AI-based Architecture That’ll Replace Most Existing Software.
Key bits:
Our current software is Circuit-based, meaning the applications have explicit and rigid structures like the etchings in a circuit board. Inputs and outputs must be explicitly created, routed, and maintained. Any deviation from that structure results in errors, and adding new functionality requires linear effort on the part of the organization’s developers.
New software will be Understanding-based. These applications will have nearly unlimited input because they’re based on natural language sent to a system that actually understands what you’re asking. Adding new functionality will be as simple as asking different questions and/or giving different commands.
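The contrast above can be made concrete with a toy sketch of my own (not from the article): a "circuit-based" function demands exact, typed inputs and fails on anything else, while an "understanding-based" interface accepts free-form requests. Here the natural-language side is faked with a regular expression as a stand-in for what would really be a call to a language model; the function names are illustrative, not from any real system.

```python
import re

def circuit_based_report(region: str, year: int) -> str:
    """Rigid interface: inputs must arrive pre-structured and typed,
    and any deviation is an error the caller must handle."""
    if not isinstance(year, int):
        raise TypeError("year must be an int")
    return f"Sales report for {region}, {year}"

def understanding_based_report(request: str) -> str:
    """Stand-in for an LLM-backed interface: extracts intent from
    free-form text. A real system would send `request` to a model
    instead of using this brittle regex."""
    m = re.search(r"for (\w+)\D*(\d{4})", request)
    if m is None:
        raise ValueError("could not parse request")
    region, year = m.group(1), int(m.group(2))
    return f"Sales report for {region}, {year}"
```

The point of the sketch is the shape of the interface, not the parsing trick: `circuit_based_report("Europe", 2022)` and `understanding_based_report("please pull the report for Europe covering 2022")` produce the same output, but only the second tolerates inputs nobody explicitly routed in advance.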
5. Great Twitter thread on 24 great AI resources.
“Make your avocation your vocation. Make what you love your work. The journey is the reward. People think that you’ve made it when you’ve gotten to the end of the rainbow and got the pot of gold. But they’re wrong. The reward is in the crossing the rainbow. That’s easy for me to say—I got the pot of gold (literally). But if you get to the pot of gold, you already know that that’s not the reward, and you go looking for another rainbow to cross. Think of your life as a rainbow arcing across the horizon of this world. You appear, have a chance to blaze in the sky, then you disappear. To know my arc will fall makes me want to blaze while I am in the sky. Not for others, but for myself, for the trail I know I am leaving.”
- Steve Jobs