The Strawman's Weekly
mikebrisy
Added 3 months ago

Another cracking Weekly from @Strawman. Loved it so much I got my BA to spin up a handy EPS Diagnostic Checklist, which I will use.

I did this because, even though I reckoned (as I read the newsletter with my morning coffee) that I already do quite a lot of it, I reflected that my process is a bit random: sometimes focusing on one or more elements while neglecting others.


EPS Diagnostic Framework



30
Shapeshifter
Added 3 months ago

Another excellent, informative and educational newsletter @Strawman - thanks.

It made me reflect on one of my best recent performers from a share price perspective - Energy One.

Here's what I found:

Underlying EPS up 58% in FY25, and a long-term EPS CAGR of 22% since 2016 (share count CAGR of 6% over the same period). EPS was depressed in FY23 and FY24 as EOL were investing in the business and costs were up.

Net debt decreased by $7.5m (down 53% on FY24) through improved cash earnings as well as a working capital benefit.

Cash balance doubled to $4m.

Trade and other receivables did increase to $3.8m (up 51% on FY24), put down to a build-up of project work and some customers delaying payments into the next FY.

No buybacks.

My conclusion from this quick reflection is EOL are growing EPS rapidly while the balance sheet improves on the background of moderate share count growth.
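For anyone wanting to sanity-check figures like these, the CAGRs quoted above can be cross-checked with a quick sketch. This is purely illustrative (the `cagr` helper is mine, and only the growth rates come from the post, not the underlying EPS values):

```python
def cagr(start, end, years):
    """Compound annual growth rate between a start and end value."""
    return (end / start) ** (1 / years) - 1

# A 22% EPS CAGR over FY2016-FY2025 (9 years) implies roughly a 6x multiple,
# while a 6% share-count CAGR implies roughly a 1.7x increase in shares:
eps_multiple = 1.22 ** 9    # about 6.0
share_multiple = 1.06 ** 9  # about 1.7
```

The gap between the two multiples is the point of the post: per-share earnings compounding far faster than the share count dilutes.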

23

Strawman
Added 3 months ago

Great framework @mikebrisy (which I'm totally going to steal!), and I reckon you're right with the EOL example @Shapeshifter

13
mikebrisy
Added 4 months ago

On a rainy Brisbane afternoon, while I just finish a cup of tea before returning to some domestic duties, I read the Weekly @Strawman Newsletter. And it prompts me to share two reflections, which I combine in this one post, even though they are only partly related.

Reflection 1

There is great insight once again in @Strawman's Weekly. There's great value in repeating important insights.

As I get older (now in my 4th decade of business life), while I am perpetually in awe of the seemingly endless fruits of human innovation, it is equally true that many times when I read about the next "latest thing", it is simply a repackaging of ideas that have been around for aeons or, more precisely, literally decades.

My areas of specialisation in business include strategy and markets, operations, supply chain, and project management. I have consulted on all and also taught MBAs. However, in each area, the most important things (and in particular the things that firms and leaders get wrong most often) tend to be some of the oldest wisdoms and learnings. And time and again, they tend to be very practical, simple, often intuitive ideas.

So once again, the Weekly Letter really resonated with me. Given all the noise and messages we are exposed to each day, it is really good to have the enduring and potent messages (that we forget at our peril) reinforced.

So @Strawman - you will never have nothing to say!

Reflection 2

As I reflect on the amazing advances in AI, which continue to make me more productive in my research, I often wonder what effect all the "rubbish" out there in the digital world is having on AI effectiveness. This is tangentially related to my first point, because with so much AI-generated content entering the "digital universe", I believe that human critical thinking is becoming even more important, and not outsourceable to AI.

And so I'd like to share something that happened to me last week.

I was sitting in a meeting at a local organisation where I do some volunteer work, and the topic of nuclear energy came up. That morning on LinkedIn, I'd read a post about some recent advances in SMR developments. I enthusiastically added these breaking "insights" into the conversation. One of my colleagues asked me to send them the article after the meeting.

When I got home, I dutifully fired up LinkedIn and searched for the article I'd read that morning. As it didn't have any sources, I turned to ChatGPT 4.0 and asked it to gather the sources for me, so that I could send a more authoritative article to my colleague.

Sure enough, my AI BA dutifully came back with a beautifully structured and well-written article about the latest technology I'd enquired about, even providing details of the economics, energy output, and plans by the Japanese Government to roll out the technology by 2030.

But then I looked at the sources. First, all the sources specifically relating to the technology were from blogposts, social media, Reddit, LinkedIn etc. The quality sources were general review articles about developments in SMR technology, but these didn't appear to reference the specific technology of interest.

A strange sensation developed in my gut. Something wasn't right.

As I clicked on each of the "low quality" sources, I found they were recycling the same "facts" with remarkable consistency. The only thing that differentiated them was the images,... images that had that characteristic "AI-generated" feel about them. Some of these AI images were even shared between sources that didn't reference each other, with no image credit.

I realised at that point I'd been duped. (The elapsed time between starting my search and this lightbulb going off was only about 5 minutes, but it felt like an age.)

So now I asked a balanced but probing question of ChatGPT 4.0 "Deep Research" mode, to investigate whether this breaking-news SMR technology was legitimate. And lo and behold, it generated a report setting out all the evidence and arguments for why the story was "fake news". Something I'd already concluded with high certainty.

Red-faced, I emailed my colleague and reported that I had unwittingly passed on fake news. I said if he was interested in the state of SMR technology, he could read the following link to a September 2024 report by the IAEA.

When I went back to the original blogpost by a prolific US tech blogger (which seemed to be the primary source from which all the LinkedIn AI-generated fake posts, written by multiple "content creators", had been spawned), I could see why it duped me. It had enough elements close enough to where the industry is to be plausible. I like to think of myself as not easily fooled, given that I've spent at least 20 years in one form or another working in the energy sector. But I had been fooled,... almost.

More and more of the digital content I am scrolling through online is clearly being automatically generated. In this case, I don't know how the tech blogger made the error he did. I don't believe it was a deliberate act of spreading false information, but rather an act of laziness, in generating a daily or weekly blogpost to maintain their place in the algorithm.

So, as the proportion of "garbage" in the digital universe multiplies, and trains the LLMs, what does that mean for the effectiveness of AI? Clearly, algorithms can be written to QA/QC content and error-trap this kind of issue. But it is evidently not yet effective in the tools I am using.

What are others finding?


50

Lewis
Added 4 months ago

@mikebrisy I had similar thoughts to you reading the email this morning; I was worried @Strawman was building up to a hiatus announcement. Warren Buffett has been saying the same things for the best part of a century; the beauty is that every year he finds a new audience, and for the repeat listeners, different parts resonate and different ideas click as their own understanding grows.

On reflection 2, my personal thesis on AI is that it will struggle because something (or someone) that is correct 99% of the time is perversely more untrustworthy than something that's right 50/50 or even 75% of the time. If you know it may be wrong, you double check. If you're used to it being infallible, eventually you'll take something incorrect into a high-stakes decision and get burned. AI is also training itself to appear correct, so it misses a lot of the red flags we're used to when dealing with people. I think it falls into the category of "talks the talk but can't walk the walk" until it can solve that problem (or people gain experience and get better at driving it).

31

Solvetheriddle
Added 4 months ago

Mike, to be truthful, I was shocked when I moved from professional to retail investing by how much people relied on Google for "research", something I had never done. I suppose I had the luxury of research that could be attributed to someone who had their reputation tied to what they wrote. Otherwise I went to source documents; everything else was just opinion. Even now I feel uneasy relying on people I don't know for facts. Maybe that reluctance has reasons lol

35

PortfolioPlus
Added 4 months ago

Well, I found the Strawman article to be compelling reading - almost like a detective novel where the murderer is uncovered on the very last page.

Take heart Strawman, your issue isn't rare. I wrote a monthly subscription business newsletter (Positive Business) for 19 years and many was the time a subscriber would say 'I know that' to which my standard response was "I know you know, that's not the issue, it's what have you done with that knowledge, because knowledge without action is the same as not knowing at all" - 99% of the time it was game, set and match to me.

On the basis that common sense is not all that common, we all need to be constantly reminded of the basics and, specifically, whether we are applying them.

I always find your weekly newsletters very handy and of a quality rarely found, so I think you are being a tad hard on yourself.

39

mikebrisy
Added 4 months ago

Great points by @Lewis and @Solvetheriddle. I always treat the output from AI searches with suspicion in proportion to the importance of the search or the type of decision it supports. I find it pretty reliable in summarising or extracting insights from a known dataset, like a set of annual reports or a suite of provided documents. But when I task it with work where it gets to choose its sources, I always personally check any sources which are critical to the key output. I do this in the way you would carefully check the work of a newly hired BA who hadn't yet proven themselves to you. Of course, in your prompts you can direct AI as to source quality, and I find that a helpful thing to do sometimes.

Like you @Solvetheriddle, I've had to make the transition from well-resourced organisations, where you could access high-quality information resources, to being a sole operator. And yes, Mr Google has been a key resource for many years now. But you can still often find quality information if you put the effort in. It's usually a case of thinking carefully about what you really need, triangulating questionable info from multiple independent sources, and digging deeply enough to find the quality resources.

The AI anecdote I shared showed how I wasn’t prepared to pass on a low quality source, or an AI summary on its own, without doing my research due diligence. But it was a "near miss" because initially I had honestly been duped, and in my initial verbal communication, unwittingly became part of the chain of misinformation.

It was a wake-up call because I take real money decisions based on the strength of my analysis. And as the old saying applies … GIGO.

As @Lewis rightly observes, generative AI is skilled at dressing up garbage and making it look like a quality product. So I will consciously redouble my efforts to take some of the capacity AI has given me to make sure I do a thorough QA/QC whenever it matters.

27

Slomo
Added 4 months ago

@mikebrisy, interesting take on using AI and I've always liked your BA analogy, this seems like a good way to frame it to me. It feels like the BA has become more eager to please over time so you need to really ask for the risks / negatives when looking broadly at a business.

The sources are of course critical so using constraints seems like a logical if not foolproof approach to controlling for garbage sources.

I’ve similarly been duped by sources that when I looked through to them found it was AI generated slop – in my case a page giving ASX research that I’d never heard of but looked good. Lucky for me they had mixed the attributes of 2 companies into one and I knew enough about the business being discussed to know some of the info was way off and must have been about a completely different business.

‘Closed’ tools like Google’s NotebookLM are good for this as they only use sources (PDFs, MP3s, URLs, etc.) that you give them. ChatGPT has custom GPTs for this, Perplexity has ‘spaces’ or something like that, Gemini has Gems, etc.

It’s horses for courses – sometimes you want AI to go fishing with a net, and you need to tune your BS meter accordingly. Other times you want spear fishing / precision, in which case you might treat AI more like a search tool, or use closed models with only the info you give them.

Another trick I’ve just started using is in ‘Open’ models like Gemini, ChatGPT or Perplexity, etc, prompt it with something like "Using only ASX announcements....and the attached..." that way you are constraining it to info from sources that should be reliable / controllable.

I also include at the end of each prompt for this sort of thing “Show references to source material wherever possible.” and “Take your time, I’m in no hurry.”

Not sure if the last one helps, but worth a try in case there’s a trade-off between getting it done quickly and getting it done accurately, like with actual BAs.
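The constraint tricks above (naming allowed sources, asking for references, telling it to take its time) can be captured in a tiny prompt-builder. This is just an illustrative sketch; the function name and structure are my own, not any tool's API:

```python
def build_prompt(question, allowed_sources, attachments=()):
    """Compose a research prompt that constrains the model to named
    sources and asks for citations, per the tips above (illustrative)."""
    constraint = f"Using only {', '.join(allowed_sources)}"
    if attachments:
        constraint += f" and the attached {', '.join(attachments)}"
    return "\n".join([
        f"{constraint}: {question}",
        "Show references to source material wherever possible.",
        "Take your time, I'm in no hurry.",
    ])

prompt = build_prompt(
    "Summarise the FY25 result.",
    ["ASX announcements"],
    ["annual report"],
)
```

The point is simply that the constraints live in a reusable template, so every research prompt gets them rather than relying on memory.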

19

BigStrawbs70
Added 4 months ago

This is a great topic.

While we are calling out the errors that can and do occur when relying on AI research… how many folks (not saying anyone in this site, I’m referring to the investment community at large) have followed the advice of ‘experts’ and lost a material amount of money? Human advice is not always great, and therefore, DYOR on the source materials and everything else outlined in earlier posts is still very much needed. As Strawman has highlighted a few times, you can borrow the idea, but you can’t borrow the conviction!

On a somewhat related topic, I also can’t help but think about some of the commentary regarding self-driving cars and the subsequent discussions when they’re involved in an accident. I wonder if the pass mark is zero accidents, about the same as humans, or just fewer than humans? My initial thought is that if they are as good as humans, or even better, then that’s a pass mark. Zero accidents (aka mistakes) is not realistic, or even needed for the roads to be safer than they are now.

All of that is the long way of saying: there will always be a need to look under the covers, regardless of where the advice is coming from. After all, even Uncle Warren has been wrong a few times, so let's be careful not to hold AI to a higher standard than we hold the humans...nor should we just assume it is always correct, not for now anyhow lol :)

16

Saasquatch
Added 3 months ago

My initial response to this is fear and anxiety if a heavy reliance on these LLMs ensues, which it inevitably will because there's less friction to getting work done. There is a severe risk that the US CIA and government will capture this opportunity as a means to coordinate the masses, create confusion, and drive populace thought in order to keep themselves at the top of the pyramid of control, as has been the case with the underlying monetary systems and the way they are coerced and manipulated globally.

5
SayWhatAgain
Added 8 months ago

Hi all, following on from the weekend convo, came across this on ABC’s Late Night Live:

Yanis Varoufakis on Trump's shock plan for the global economy

https://www.abc.net.au/listen/programs/latenightlive/yanis-varoufakis-trump-tariffs/105180002?utm_content=link&utm_medium=content_shared

Economist Yanis Varoufakis says we shouldn't underestimate Trump - he has a plan to shock the global economy, force foreign countries to buy crypto-currency in return for a deal on tariffs, and ultimately pay down debt and keep the US dollar at the centre of the global financial system. But it's a high risk strategy that also paves the way for the "tech bros" to make a lot of money.

Cheers!

20

Strawman
Added 8 months ago

That was an interesting listen @SayWhatAgain

I think Yanis made a few factual errors (eg Tether buys short-dated, not long-dated bonds, and I can't see why Japan would swap US treasuries -- which pay a yield -- for USDT, which is a zero-yielding abstraction on the actual currency), but the thesis that Trump is deliberately trying to weaken the dollar is interesting.

Honestly, I get the strategic rationale if your goal is to reshore productive capacity. But it's just going to be a very long and expensive endeavour. It certainly could have been executed better, so far.

14

SayWhatAgain
Added 8 months ago

Certainly was @Strawman. I don't think he believes the Trumpian strategy will succeed, but he is right in warning not to underestimate what his regime is doing. His parallel to the Nixon shock was fitting, and his commentary that it is not the man but the team behind that we should not underestimate makes sense to me. I agree that his suggestion that Japan could be bullied into carrying out the dollar devaluation plan, potentially via crypto, sounds far-fetched...but it is provocative. His comments on China, I think, are spot on though...they do have a rapidly emerging digital infrastructure and if they get pushed too far, they will 'push the button' and still keep the trillions of US dollars they hold....and that is when we will see a change in the world order!

This is another (more pessimistic) take on Trump's plans by Satyajit Das (Ex-financier warns economic downturn comparable to Great Depression could be beginning abc.net.au). He also suggests they have a plan, but one that is somewhat more erratic and destructive. Both Y and S make the same overall point though: we are in the midst of significant, maybe structural, realignments. What does it mean for us? He suggests that the effects will be secondary, mostly because of our economic reliance on China. He suggests the time will come when we'll have to choose. He quotes Kissinger, saying it is dangerous to be the US's enemy, but being their 'friend' is fatal. That may be true, but I don't know what the USA has done for us....and in a changing world order we should probably align with those in our region (?)....to be honest, personally I am not sure.....What concerns me, though, is how little strategic conversation there is on this from either side of politics. Our major parties seem adrift — uninspired, and uninspiring, more interested in grabbing headlines than showing much in the way of vision, let alone strategy....humphhhh!

18