The world we live in is at a major inflection point: the symbiotic relationship between generative AI and semiconductors will impact society for generations to come. Tremendous advances in semiconductor technology have produced an explosion of new applications for AI.
In my opinion, the resulting ubiquitous deployment of AI creates a new "three-body problem." It is something we must be cognizant of if we are to maximize the incredibly good impact of AI while implementing guardrails to minimize the potentially bad outcomes brought about by the prevalence of AI in all aspects of our lives.
But wait, you might ask, "What is the three-body problem?" Well, if you have seen the Netflix series, or read the 2008 novel by Chinese science fiction author Liu Cixin, then you might have an idea. For those who have not, here is some background.
At its root, it is a question of celestial mechanics. Britannica defines the three-body problem in astronomy as the challenge of determining the motion of three celestial bodies moving under no influence other than that of their mutual gravitation. It notes that no general solution of this problem (or the more general problem involving more than three bodies) is possible, as the motion of the bodies quickly becomes chaotic.
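For readers who want the underlying math, here is a minimal sketch of the Newtonian setup (my notation, not Britannica's, assuming point masses interacting only through gravity): each body i, with mass m_i and position vector r_i, obeys

$$ m_i \ddot{\mathbf{r}}_i = \sum_{j \neq i} \frac{G\, m_i m_j (\mathbf{r}_j - \mathbf{r}_i)}{|\mathbf{r}_j - \mathbf{r}_i|^3}, \qquad i = 1, 2, 3. $$

With three bodies this is a system of coupled nonlinear differential equations that, unlike the two-body case, admits no general closed-form solution; arbitrarily small differences in starting positions or velocities can produce wildly divergent trajectories, which is precisely what "chaotic" means here.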
Taking this into the realm of science fiction, Liu's novel The Three-Body Problem asks what it would mean for the human race to come into contact with an extraterrestrial intelligence. A 2014 NPR book review puts the stakes for human society into context: "By the time the book hits its peak, it's unveiled a conspiracy that spans solar systems -- one that not only threatens to alter the human race, but the very building blocks of physics that we've evolved to understand."
This is where we begin to face a parallel to the three-body problem, as we consider the interplay of generative AI, semiconductors and humanity. As I mentioned at the beginning of this article, we are at a major inflection point: the ubiquitous deployment of AI in all aspects of our industries, our businesses and our lives is creating a market pull for semiconductor innovation to further improve our AI-centric outcomes and experiences.
But does that mean we are on the verge of science fiction becoming non-fiction, whether we call it artificial general intelligence (AGI) or GenAI? A simple framework for thinking about this is Gartner's technology hype cycle, which charts a technology's path from inflated expectations to mainstream adoption.
Over the years, I have come to rely on this powerful analytical framework to track the evolution of new technologies, from industry and investor hype all the way through to productive deployment, whether for industrial or consumer use.
If you have been following the media coverage of the enormous potential benefits, and the equally troubling dystopian concerns, centered on the development of AI, you probably get the sense that we are entering (or, some would say, are already in) the trough of disillusionment.
For an interesting analysis, check out a recent article by Vinod Khosla, "AI: Dystopia or Utopia?" Here is the opening position Khosla takes:
"I've seen technology reshape our world repeatedly. Previous technology platforms amplified human capabilities but didn't fundamentally alter the essence of human intellect. They extended our reach but didn't multiply our minds.
Artificial intelligence is different. It's past the point where a difference in degree becomes a difference in kind. AI amplifies and multiplies the human brain, much like steam engines once amplified muscle power. Before engines, we consumed food for energy and that energy we put to work. Engines allowed us to tap into external energy sources like coal and oil, revolutionizing productivity and transforming society. AI stands poised to be the intellectual parallel, offering a near-infinite expansion of brainpower to serve humanity."
Although California's SB-1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, was vetoed by Gov. Gavin Newsom, it has raised awareness and kick-started dialogue among industry and governments, here and across the globe, about how best to minimize the negative aspects of AI while focusing on all the societal benefits it can bring. It is encouraging to see that there is now a movement underway to build out a Global AI Safety Summit, which convenes its next meeting in San Francisco later this month. There is also a newly launched U.S.-based NIST AI Safety Institute.
In my opinion, there are some fundamental questions that need to be addressed, sooner rather than later. Have we reached the bottom of the trough of disillusionment? And if not, what is still to come?
The current push for human-centered AI, responsible innovation and "humans in the loop" seems intended to put the brakes on the slide toward the bottom of the trough. Is that enough? Can we find a public-private solution?
Finally, how do "we" (collectively as a community) achieve the plateau of productivity, across all aspects of business, society and our planet?
I believe the key challenge for all of us is to collectively solve this new three-body problem: to understand how we maximize the incredibly good impact of AI while implementing guardrails to minimize the potentially bad outcomes.
We will be discussing these topics, and how to maximize the slope of our "AI enlightenment" in the shortest amount of time, at the 2024 Silicon Catalyst Semiconductor Industry Forum. Our esteemed panel of technology, venture capital and public policy representatives welcomes your input to the discussion.