AI Plays
Chagsy
Added 6 months ago

The amazing performance of Nvidia and, to a lesser extent, the Magnificent 7 has been well broadcast. I am sure most of us have benefited to some degree from owning some of these stocks, either individually or as part of a broader ETF.

There has been a lot of discussion about how best to benefit, as an investor, from the potential of AI. Most of the well-known exposures to AI are richly valued, and outsized gains from today's prices assume a lot of things continue not just in a linear fashion but an exponential one. It's quite possible that may come to pass. Direct comparisons between the internet boom and the AI boom may not be accurate, given that as AI improves, its rate of improvement improves too. Well, probably. The counter-narrative is that when a new investment idea appears, the long-term increase is under-priced but the near-term gains are over-priced, so we should expect a significant dip for a few years before a more sustained and full-throated upswing. If this approach is coupled with the theory that picking the individual winners early in a new paradigm is impossible, then the thoughtful investor may choose to stay their hand until things become clearer, or choose a different avenue to benefit from the trend.

The two most common options are increasing your exposure to data centres (NextDC and GMG, but to a lesser degree Infratil) and other spade sellers (companies providing power/renewable energy/essential materials/componentry for data centres etc). Most of these plays seem fully valued as well.

I have read two articles in the last 24 hrs suggesting Robotics could be an under-appreciated beneficiary: Tim Boreham's column in The Australian, and a series of articles in The Economist. The most directly relevant I have copied below. The other thematic could be cybersecurity.

If you don't want to invest in International stocks directly, then there are two ETFs on the ASX directed at the Robotics theme: ROBO and RBTZ.

There are some significant differences: RBTZ is much more concentrated, holding 42 companies versus ROBO's 77:

RBTZ

[Image: RBTZ holdings breakdown]

ROBO

[Image: ROBO holdings breakdown]


Both are of similar size (~$237m vs ~$264m) and both have similar MERs of ~0.6%. The small size of the funds would have to be considered a risk.

As for TTM PE ratios (although I'm not sure of their relevance in such a speculative, high-growth area), ROBO sits at 30 and RBTZ at 46, according to Yahoo Finance.

I'm unconvinced of the merits of this as a strategy as yet, but it's an interesting space in which to do some digging.

(DISC: I hold Infratil and HACK, and have exposure to various "Nasdaq/Technology" ETFs)


“WHAT DO YOU see right now?” a man asks a humanoid robot standing behind a table. “I see a red apple on a plate in the centre of the table, a drying rack with cups and a plate, and you standing nearby, with your hand on the table,” the robot replies. “Great—can I have something to eat?” says the man. “Sure thing,” says the robot. It flexes its arms, picks up the apple with its right hand, passes it to its left hand, and then hands it to the man. He asks the robot to explain its actions. “So I gave you the apple, because it’s the only, er, edible item I could provide you with from the table,” it replies.

This demonstration, shown in a video released by Figure, a robotics startup, in March, caused widespread amazement. It shows a robot that can hold spoken conversations, recognise and manipulate objects, solve problems and explain its actions. And Figure is not alone in producing such impressive results. After years of slow progress, robots suddenly seem to be getting a lot cleverer. What changed?

The magic ingredient is artificial intelligence (AI). Academic researchers, startups and tech giants are taking advances in AI, such as large language models (LLMs), speech synthesis and image recognition, and applying them to robotics. LLMs are known for powering chatbots like ChatGPT—but it turns out that they can help power real robots, too. “The algorithms can transfer,” says Peter Chen, chief executive of Covariant, a startup based in Emeryville, California. “That is powering this renaissance of robotics.”

The robot in Figure’s video had its speech-recognition and spookily lifelike speech-synthesis capabilities provided by OpenAI, which is an investor in the company. OpenAI shut down its own robotics unit in around 2020, preferring instead to invest in Figure and other startups. But now OpenAI has had second thoughts, and in the past month it has started building a new robotics team—a sign of how sentiment has begun to shift.

A key step towards applying AI to robots was the development of “multimodal” models—AI models trained on different kinds of data. For example, whereas a language model is trained using lots of text, “vision-language models” are also trained using combinations of images (still or moving) in concert with their corresponding textual descriptions. Such models learn the relationship between the two, allowing them to answer questions about what is happening in a photo or video, or to generate new images based on text prompts.
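To make that concrete, here is a minimal sketch of querying a vision-language model, using the Hugging Face transformers visual-question-answering pipeline. The pipeline task is real; the specific model checkpoint and image path are illustrative choices, not anything taken from the article:

```python
# A small vision-language example: the model was trained on image-text
# pairs, so it can answer free-form questions about a photo it has
# never seen before.
from transformers import pipeline

# Model checkpoint and image path are illustrative assumptions.
vqa = pipeline("visual-question-answering",
               model="dandelin/vilt-b32-finetuned-vqa")

result = vqa(image="table_with_apple.jpg",
             question="What is on the plate?")
print(result[0]["answer"], result[0]["score"])
```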

Wham, bam, thank you VLAM

The new models being used in robotics take this idea one step further. These “vision-language-action models” (VLAMs) take in text and images, plus data relating to the robot’s presence in the physical world, including the readings on internal sensors, the degree of rotation of different joints and the positions of actuators (such as grippers, or the fingers of a robot’s hands). The resulting models can then answer questions about a scene, such as “can you see an apple?” But they can also predict how a robot arm needs to move to pick that apple up, as well as how this will affect what the world looks like.
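As a way of picturing that input/output contract, here is a toy sketch in Python. None of this is any vendor's actual API; it just shows the shape of the idea: a single model that consumes text, camera images and the robot's own body state together, and emits both language and motor commands.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical structures only: the point is that a VLAM's observation
# mixes language, vision and proprioception in one input.
@dataclass
class Observation:
    instruction: str             # e.g. "can you see an apple?"
    camera_frames: List[bytes]   # still or moving images
    joint_angles: List[float]    # degree of rotation of each joint
    gripper_position: float      # actuator state, 0.0 open .. 1.0 closed

@dataclass
class Action:
    reply: str                   # natural-language answer, if any
    joint_deltas: List[float]    # how each joint should move this step
    gripper_target: float        # desired gripper position

def control_loop(model, robot):
    """Run the VLAM as a robot 'brain': observe, predict, act, repeat."""
    while True:
        obs: Observation = robot.observe()   # text + pixels + sensors in
        act: Action = model.predict(obs)     # words + motion out
        robot.apply(act)
```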

In other words, a VLAM can act as a “brain” for robots with all sorts of bodies—whether giant stationary arms in factories or warehouses, or mobile robots with legs or wheels. And unlike LLMs, which manipulate only text, VLAMs must fit together several independent representations of the world, in text, images and sensor readings. Grounding the model’s perception in the real world in this way greatly reduces hallucinations (the tendency for AI models to make things up and get things wrong).


[Image: More power to your elbow. Photograph: Sereact]

Dr Chen’s company, Covariant, has created a model called RFM-1, trained using text, images, and data from more than 30 types of robots. Its software is primarily used in conjunction with “pick and place” robots in warehouses and distribution centres located in suburban areas where land is cheap, but labour is scarce. Covariant does not make any of the hardware itself; instead its software is used to give existing robots a brain upgrade. “We can expect the intelligence of robots to improve at the speed of software, because we have opened up so much more data the robot can learn from,” says Dr Chen.

Using these new models to control robots has several advantages over previous approaches, says Marc Tuscher, co-founder of Sereact, a robotics startup based in Stuttgart. One benefit is “zero-shot” learning, which is tech-speak for the ability to do a new thing—such as “pick up the yellow fruit”—without being explicitly trained to do so. The multimodal nature of VLAM models grants robots an unprecedented degree of common sense and knowledge about the world, such as the fact that bananas are yellow and a kind of fruit.

Bot chat

Another benefit is “in-context learning”—the ability to change a robot’s behaviour using text prompts, rather than elaborate reprogramming. Dr Tuscher gives the example of a warehouse robot programmed to sort parcels, which was getting confused when open boxes were wrongly being placed into the system. Getting it to ignore them would once have required retraining the model. “These days we give it a prompt—ignore open boxes—and it just picks the closed ones,” says Dr Tuscher. “We can change the behaviour of our robot by giving it a prompt, which is crazy.” Robots can, in effect, be programmed by non-specialist human supervisors using ordinary language, rather than computer code.
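In code terms, both the zero-shot and in-context behaviours amount to the instruction being just another model input. A hypothetical sketch of the parcel-sorting fix, with made-up function and variable names:

```python
# Behaviour change by prompt edit, not retraining. `model.predict` is a
# hypothetical VLAM call; the strings are the whole "reprogramming".
BASE_PROMPT = ("You control a parcel-sorting arm. "
               "Place each parcel on the conveyor.")

def step(model, prompt, observation):
    # The prompt rides along with every observation, steering the policy.
    return model.predict(prompt=prompt, observation=observation)

# Zero-shot: a task the model was never explicitly trained on.
novel_task = BASE_PROMPT + " Pick up the yellow fruit first."

# In-context fix: open boxes were confusing the system, so say so directly.
patched = BASE_PROMPT + " Ignore open boxes; pick closed ones only."
```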

Such models can also respond in kind. “When the robot makes a mistake, you can query the robot, and it answers in text form,” says Dr Chen. This is useful for debugging, because new instructions can then be supplied by modifying the robot’s prompt, says Dr Tuscher. “You can tell it, ‘this is bad, please do it differently in future.’” Again, this makes robots easier for non-specialists to work with.

Being able to ask a robot what it is doing, and why, is particularly helpful in the field of self-driving cars, which are really just another form of robot. Wayve, an autonomous-vehicle startup based in London, has created a VLAM called Lingo-2. As well as controlling the car, the model can understand text commands and explain the reasoning behind any of its decisions. “It can provide explanations while we drive, and it allows us to debug, to give the system instructions, or modify its behaviour to drive in a certain style,” says Alex Kendall, Wayve’s co-founder. He gives the example of asking the model what the speed limit is, and what environmental cues (such as signs and road markings) it has used to arrive at its answer. “We can check what kind of context it can understand, and what it can see,” he says.

As with other forms of AI, access to large amounts of training data is crucial. Covariant, which was founded in 2017, has been gathering data from its existing deployments for many years, which it used to train RFM-1. Robots can also be guided manually to perform a particular task a few times, with the model then able to generalise from the resulting data. This process is known as “imitation learning”. Dr Tuscher says he uses a video-game controller for this, which can be fiddly.

But that is not the only option. An ingenious research project at Stanford University, called Mobile ALOHA, generated data to teach a robot basic domestic tasks, like making coffee, using a process known as whole-body teleoperation—in short, puppetry. The researchers stood behind the robot and moved its limbs directly, enabling it to sense, learn and then replicate a particular set of actions. This approach, they claim, “allows people to teach arbitrary skills to robots”.
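For a sense of what imitation learning looks like under the hood, here is a minimal behaviour-cloning sketch, assuming PyTorch and entirely made-up dimensions: the teleoperated demonstrations become a supervised dataset of (observation, action) pairs, and a small policy network is fitted to copy them.

```python
import torch
import torch.nn as nn

# Stand-ins for demonstrations recorded while a human "puppets" the robot:
# each sensor snapshot is paired with the action the human produced.
# (Random tensors here; sizes are illustrative only.)
observations = torch.randn(500, 64)   # 500 snapshots, 64 sensor readings
actions      = torch.randn(500, 7)    # matching 7-joint arm commands

# A small policy network mapping observation -> action.
policy = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 7))
optimiser = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Behaviour cloning: plain supervised regression onto the human's actions.
for _ in range(200):
    optimiser.zero_grad()
    loss = nn.functional.mse_loss(policy(observations), actions)
    loss.backward()
    optimiser.step()
```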

Investors are piling in. Chelsea Finn, a professor at Stanford who oversaw the Mobile ALOHA project, is also one of the co-founders of Physical Intelligence, a startup which recently raised $70m from backers including OpenAI. Skild, a robotics startup spun out of Carnegie Mellon University, is thought to have raised $300m in April. Figure, which is focusing on humanoid robots, raised $675m in February; Wayve raised $1.05bn in May, the largest-ever funding round for a European AI startup.

Dr Kendall of Wayve says the growing interest in robots reflects the rise of “embodied AI”, as progress in AI software is increasingly applied to hardware that interacts with the real world. “There’s so much more to AI than chatbots,” he says. “In a couple of decades, this is what people will think of when they think of AI: physical machines in our world.”

As software for robotics improves, hardware is now becoming the limiting factor, researchers say, particularly when it comes to humanoid robots. But when it comes to robot brains, says Dr Chen, “We are making progress on the intelligence very quickly.” 


RhinoInvestor
Added 6 months ago

@Chagsy thanks for the extensive detail.

I know I personally have moved away from thematic ETFs (I’ve had a few of them earlier in my investing and still hold some today, e.g. ASIA, ATEC, FANG, IIND, SEMI). Some have done well, some not so well (might be timing, might be Mr Xi’s fault for ASIA). I like the ability to invest in a theme and also sometimes get access to the type of companies that would be hard to invest in directly, especially in countries such as India and Korea and, to a lesser extent, Japan and China. I’m not using them any more, as I think that, just like any basket, you are always going to get some not-so-good stocks holding things back, and IMHO the extra expense compared to a broad-based index fund or direct stock holdings seems a little high to me.

On the robot front (and I don’t know if it’s a good or bad stock) there is Zebra Technologies … whose heritage is barcode scanning … which seems to me to be quite a way from what most people perceive a robot to be (i.e. C3PO).

I’ve also been doing some research into some early-stage companies who are building robots for very specific industrial activities, such as checking the quality of concrete on the sides of bridges and buildings. Once again, not what you’d typically think of, but the founder had a bunch of tremendous business cases where they have displaced humans dangling off ropes in dangerous situations and saved a lot of money at the same time.

I’ve recently been doing some research into Tesla (as a result of its latest stock price declines … I’m always looking for a contrarian position, and now that EVs seem to have lost some of their shine it’s making me do some digging). I find the optionality they have beyond cars pretty interesting, in areas such as robotics and energy. https://electrek.co/2024/03/27/elon-musk-tesla-optimus-robot-cost-less-than-half-car/ I guarantee you, if the Optimus can clean up after my kids for $20K, I’m selling all my DRO stock to buy one. It’s pretty interesting that everyone thinks of humanoid robots, but I think we are going to see a range of different formats … and hopefully some Aussie small caps that we can dig into in Strawman over time … https://www.f6s.com/companies/robotics/australia/co

It’s certainly an interesting area, and probably worth also assessing one’s current holdings to see whether robotics will have a positive or negative impact on them.


mikebrisy
Added 6 months ago

@Chagsy I thoroughly enjoyed this read.

I used to own some of the ETFs that gave me overseas exposure to themes like $ROBO and $HACK, but I decided to simplify things when I realised how much the big tech companies are weighted in the global index and developed-market trackers, which make up a large portion of my retirement funds and total asset base. NVIDIA makes up about 2-2.5% of my total assets, purely passively; it's currently over 4% of the MSCI Global Index.

I am now purely [Global Passive] + [ASX stock picking]. Of course, each to their own.

In terms of how I am playing the AI revolution, I think firms like $RMD in healthcare and $WTC in tech (currently not held) will benefit significantly from harnessing AI in their business models, in the same way everyone now uses the internet - albeit some better than others. They stand out because of their huge datasets. You can also add $TNE (held), $XRO (not held) and $PME (not held).

For picks and shovels, I am going with $IPG, which is also exposed to the electrification of the economy. $SXE is another in this group (not held).

Reading your section on VLAM, I was thinking just how many orders of magnitude more computing power and electricity will be needed beyond what we have today, once these technologies are fully deployed. A lot of these chips will be embedded in the end devices, so that's also going to drive $ALU in the long term. (Sniff)

I have worried about getting onboard popular trades like $NXT and $GMG, because everyone is talking about them, and the contrarian in me worries about overpaying. (Of course, I was worried about overpaying for $GMG at $20, and that didn't serve me well!!)

Anyway, I think your hypothesis about the internet-like hype cycle is probably right: long term vs. short term and picking winners. Well, confirmation bias anyway, as that is how I am looking at things.

BTW @RhinoInvestor on Zebra Technologies, they are big in the warehouse IoT, scanning and automation space. They do have some robots (see pic below; a C3PO ancestor, perhaps), but importantly their scanners and computers are important components of warehouse automation, which is one of the major segments of automation, and they get integrated into other systems. So I think they fit solidly into any robotics ETF. Of course, there are a lot of firms playing in this space, and I don't know how they stack up. Their SP is still 50% off its COVID peak, when I think they got caught up in the tech hype that people would never return to work and robots would be needed everywhere. (A supply-chain Zoom-like thematic.)

Anyway, weekends are a great time to think about some of these big picture ideas!

[Image: Zebra Technologies warehouse robot]




Chagsy
Added 6 months ago

Thanks @mikebrisy and @RhinoInvestor

I have likewise re-allocated much of my sensible investing (super) to vanilla low-cost products, including the MSCI World, which is 30% of my super. I currently have 15% in the QSuper bond product, which I aim to use as dry powder in the next correction. The remaining 50% is in (self-invest) stock picks that are limited to the ASX 300: mostly boring quality names, but a few ETFs such as HACK, which is ~6% of that 50%.

Outside of super, GNP and Infratil are 2 of my largest holdings for the same reasons you outline, but I’m mulling a 5% move in my self-invest super into a robotics-themed ETF. However, for the reasons you outlined, I’m not sure it’s really worth the increased costs.



Solvetheriddle
Added 6 months ago

@Chagsy well worth following; the game has some way to play out and, as you say, the winners may be yet to emerge and other beneficiaries may be a surprise. One lens I would be wary of is local stocks "aping" the international winners. The domestic market has shown a propensity to clone counters for the local market to "play" in hot spaces. In time we find that they were poor imitations (and expensive ones at that) of the real deal, which are usually overseas.


reddogaustin
Added 6 months ago

@mikebrisy you have identified the elephant in the AI room: electricity generation. AI is big, huge, generational even. But the build-out of electricity generation needs to be just as big.

The numbers here cannot be overstated. I've read somewhere in my journeys that DC builds today are being delayed worldwide because of the lack of available electricity generation.

Renewables cannot, at this time, provide the required generation rate or sustain peak draw.

Power generation for retail (our homes) gets talked about a lot, but the second elephant is commercial power draw: smelters, data centres etc.

Summary: we can win with AI stocks, but also with power generation/infrastructure stocks.


mikebrisy
Added 6 months ago

@reddogaustin yes, it’s an issue for sure. When there is a constraint like that in a value chain, prices, costs and demand evolve in response. So I think prices for AI products will rise, which will narrow use cases to those that really add value. Those kinds of constraints can also bring to an end valuations based on exponential growth … so chips, datacentres, etc. I have no idea about timing or magnitude, so I am keeping away from it in terms of investing, other than picks and shovels, which tend to be beneficiaries whatever the industry.


Bushmanpat
Added 6 months ago

@reddogaustin The power use is insane and growing. Apparently, in 2022, data centres accounted for almost 20% of Ireland's power consumption!


RhinoInvestor
Added 6 months ago

Interestingly, a lot of the big cloud players are looking at nuclear to power their DCs (and, by extension, a large percentage of AI workloads).


If we are going to park a small modular reactor in a big black tube (a.k.a. an AUKUS submarine) in Sydney Harbour, maybe we should plonk some SMRs next to the NextDC data centres in Sydney … what could possibly go wrong?

Just looking at the AI picks-and-shovels play … NuScale Power is up to $8.09 (157% in the last 6 months), and BWX Technologies is trading on a 32x TTM PE ratio. I’ve been contemplating looking into this trend, as we are always going to need baseload power from somewhere.


mikebrisy
Added 6 months ago

@RhinoInvestor looks like big tech is going from "capital light" (software) to "not-so-capital-light" (software, devices, cloud infrastructure) to "capital-intensive" (software, devices, nuclear power stations).

Question: has any firm investing in nuclear power ever made an economic profit?
