Yesterday I listened to the following interview with Jack Clark - one of the founders of Anthropic - exploring the big picture of societal and political risks from AI, as well as the opportunities. (This was recorded a few days ago in the margins of the big AI event in Vegas ... that's Las Vegas, not BrisVegas).
My big issue is that whether in the US, UK or Australia (three countries I follow closely), I'm not sure the quality of political discourse is up to the task. And then there's the geopolitical lens, which threatens to spoil any effective multinational compact.
Fascinating perspectives from one of the leaders in this rapidly changing field.
There's another item on this topic on this channel that will drop next weekend.
There is a lot written about the positive potential of AI but not much about its risks. A question from Stephen Mayne to the Catapult CEO at their recent AGM got me thinking about this. He essentially wanted to know the degree of the company's exposure to AI services from 'big tech', and how it would fare if those AI service providers jacked up their prices by 30%.
This got me thinking about companies that might initially benefit from adopting AI - with short-term gains from reduced staff costs and the like - but may then find themselves completely reliant on those services and at the mercy of price hikes. That is particularly so where the subsequent loss of in-house skills prevents rehiring, and where deep integration of services and technology prevents easy replacement with competitors' alternative products.
The negative impact of AI is already being seen in programming, where junior programmers have become so reliant on AI tools that they are unable to identify and rectify the errors the AI makes in their code.
It seems to me that investing in companies whose business models become reliant on AI could become much riskier, and that instead you would want to invest in the companies providing the AI to those reliant companies. I would assume the big players - Alphabet (Google) and Microsoft - are going to be the most dominant, and the next step would be finding companies producing AI models for specific industries (e.g. for lawyers to produce disclosure) that will become ubiquitous in their field and then likely become takeover targets for the big players.
Thoughts?