AI is likely the second most important global phenomenon of the moment. As a hint at what I think is the first: on Thursday, I heard Sam Altman answer (or rather half-answer) a series of questions from the who’s who of AI at Sequoia’s Ascent conference. The last person to ask a question, and the only one to really captivate Sam, was a senior technologist from the FDIC. Sam turned the tables and asked “how many other banks are similarly situated to SVB?” and “could this all be fixed if the Fed cut rates by 100 bps?” Questions of this sort, the kind upon which millions of Americans’ economic well-being depends, seem far from answerable by AI despite the recent advances, at least partially due to the availability of data. If arguably the most knowledgeable and strategically positioned person in AI asks questions limited by the availability of data[1], it is worth listening.
Nonetheless, I have compiled a list of my own (much dumber) questions, prompted by the event, about the recent advances in artificial intelligence:
Why has AI been so good at prediction tasks humans already perform well, but unimpressive at events humans themselves struggle to predict[2]?
AI has not been demonstrably successful at trading stocks, predicting pandemics, or flagging regional banks’ unrealized-loss problems.
A lot of this might be due to the repeatability and availability of data. But is that it? And what architectures might solve it?
How do the legal issues around AI training data sets get solved[3]?
What happens once AI starts to generate music based on, inevitably, Universal Music Group’s intellectual property?
In both of the questions above, data ownership and availability become key.
In this case, firms that own or store enterprise data seem to have the advantage. Which firms are best positioned to capture it[4]?
How important are vector databases for AI?
Do we need a new database purpose-built for AI, or will PostgreSQL extensions[5] and new data types in an existing RDBMS solve the problem?
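To make the second option concrete, here is a minimal sketch assuming the pgvector PostgreSQL extension and the psycopg2 driver; the connection string, table schema, and embedding dimension are illustrative assumptions, not a recommendation:

```python
# Minimal sketch: storing and querying embeddings inside PostgreSQL via the
# pgvector extension, instead of adopting a separate vector database.
# Connection details, table name, and the 1536-dim size are illustrative.
import psycopg2

conn = psycopg2.connect("dbname=demo user=demo")  # hypothetical credentials
cur = conn.cursor()

# pgvector adds a native `vector` column type plus distance operators.
cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
cur.execute("""
    CREATE TABLE IF NOT EXISTS documents (
        id        bigserial PRIMARY KEY,
        content   text,
        embedding vector(1536)  -- e.g. the size of OpenAI ada-002 embeddings
    );
""")
conn.commit()

# Nearest-neighbor search: `<->` is pgvector's Euclidean (L2) distance operator.
query_embedding = [0.01] * 1536  # placeholder; normally from an embedding model
cur.execute(
    "SELECT id, content FROM documents ORDER BY embedding <-> %s::vector LIMIT 5;",
    (str(query_embedding),),
)
for doc_id, content in cur.fetchall():
    print(doc_id, content)
```

Whether an extension like this scales as well as a purpose-built engine is precisely the open question.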
Large language models struggle with math. Somewhat amusingly, one of the most advertised ChatGPT plugins is Wolfram Alpha[6].
Is it possible to get to Wolfram Alpha-like reasoning with the current architecture of large language models alone?
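For context, the plugin pattern boils down to delegating exact computation to a tool instead of trusting the model’s next-token guesses. Below is a minimal sketch of that delegation idea; the `compute:` routing rule and the `call_llm` stub are hypothetical stand-ins, since in a real system the model itself decides when to emit a tool call:

```python
# Minimal sketch of the "delegate math to a tool" pattern behind plugins
# like Wolfram Alpha. The router and call_llm() are hypothetical stand-ins.
import ast
import operator

# Map AST operator nodes to exact arithmetic functions (no eval()).
_OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def eval_arithmetic(expr: str) -> float:
    """Safely and exactly evaluate a basic arithmetic expression."""
    def _eval(node: ast.AST):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp):
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp):
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError(f"unsupported expression: {expr!r}")
    return _eval(ast.parse(expr.strip(), mode="eval").body)

def call_llm(question: str) -> str:
    # Placeholder for a real model call; returns free text, not exact math.
    return f"(model's free-text answer to: {question})"

def answer(question: str) -> str:
    # Naive router: anything flagged as computation goes to the tool.
    if question.startswith("compute:"):
        return str(eval_arithmetic(question[len("compute:"):]))
    return call_llm(question)

print(answer("compute: 1234 * 5678"))  # exact: 7006652
print(answer("Why do LLMs struggle with arithmetic?"))
```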
What happens in labor markets?
It has become cliché to flip-flop between the argument that a large number of jobs may disappear due to AI and the argument that historical technology revolutions have always created new, previously unimaginable jobs.
Unemployment is near all-time lows, wage growth remains robust, and, save for the recent banking crisis[7], rate hikes are intended to cool the economy. Productivity growth from artificial intelligence may well be the antidote to global inflation without choking off economic growth.
Perhaps what is different in this technology platform shift is that white-collar, relatively high-paying jobs appear more at risk than in previous technological disruptions (the plumber appears less threatened than the advertiser or the programmer).
Are companies with high white-collar labor expense the biggest beneficiaries (consulting, tech, etc.)? Or large companies that can now modernize legacy systems (banks, healthcare networks, etc.)?
In short, is this a bigger lever for COGS/R&D or SG&A?
I am certainly betting on data availability being the key.
[1] I have found Matt Turck’s framework for evaluating the state of AI helpful:
[3] Beyond the copyright issue mentioned here and the misinformation issues frequently in the news, there is a series of others: https://www.zwillgen.com/privacy/artificial-intelligence-risks-privacy-generative-ai/
[4] I am certainly not unbiased here:
[7] On a side note, this article about Credit Suisse is great: