Another cracking Weekly from @Strawman. Loved it so much I got my BA to spin up a handy EPS Diagnostic Checklist, which I will use.
I did this because, as I read the newsletter with my morning coffee, I reckoned that while I already do quite a lot of it, my process is a bit random - sometimes focusing on one or two elements while neglecting others.

Another excellent, informative and educational newsletter @Strawman - thanks.
It made me reflect on one of my best recent performers from a share price perspective - Energy One.
Here's what I found:
Underlying EPS up 58% in FY25 and a long term EPS CAGR of 22% since 2016 (share count CAGR of 6% during this period). EPS was depressed in FY23 and FY24 as EOL were investing in the business and costs were up.
Net debt decreased by $7.5m (down 53% on FY24) through improved cash earnings as well as a working capital benefit.
Cash balance doubled to $4m.
Trade and other receivables did increase to $3.8m (up 51% on FY24), put down to a build-up of project work and some customers delaying payments into the next FY.
No buybacks.
My conclusion from this quick reflection is that EOL are growing EPS rapidly while the balance sheet improves, against a backdrop of moderate share count growth.
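For anyone wanting to replicate the EPS CAGR check above, the calculation is straightforward. A minimal sketch in Python - the input figures here are made-up placeholders chosen only to illustrate a ~22% result, not EOL's actual reported EPS:

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between a start and end value."""
    return (end / start) ** (1 / years) - 1

# Hypothetical EPS figures (cents per share) for illustration only:
eps_2016 = 10.0
eps_2025 = 59.8
years = 2025 - 2016  # 9 years

print(f"EPS CAGR: {cagr(eps_2016, eps_2025, years):.1%}")  # roughly 22%
```

The same function works for the share count CAGR - just swap in the opening and closing share counts over the same period.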
On a rainy Brisbane afternoon, while I just finish a cup of tea before returning to some domestic duties, I read the Weekly @Strawman Newsletter. And it prompts me to share two reflections, which I combine in this one post, even though they are only partly related.
Reflection 1
There is great insight once again in @Strawman's Weekly. There's great value in repeating important insights.
As I get older (now in my 4th decade of business life), while I am perpetually in awe of the seemingly endless fruits of human innovation, it is equally true that many times when I read about the next "latest thing", it is simply a repackaging of ideas that have been around for aeons - or, more precisely, literally decades.
My areas of specialisation in business include strategy and markets, operations, supply chain, and project management. I have consulted on all of these and also taught MBAs. However, in each area, the most important things (and in particular the things that firms and leaders get wrong most often) tend to be some of the oldest wisdoms and learnings. And time and again, they tend to be very practical, simple, often intuitive ideas.
So once again, the Weekly Letter really resonated with me. Given all the noise and messages we are exposed to each day, it is really good to have the enduring and potent messages (that we forget at our peril) reinforced.
So @Strawman - you will never have nothing to say!
Reflection 2
As I reflect on the amazing advances in AI, which continue to make me more productive in my research, I often wonder what effect all the "rubbish" out there in the digital world is having on AI effectiveness. This is tangentially related to my first point, because with so much AI-generated content entering the "digital universe", I believe that human critical thinking is becoming even more important, and not outsourceable to AI.
And so I'd like to share something that happened to me last week.
I was sitting in a meeting in a local organisation where I do some volunteer work, and the topic of nuclear energy came up. That morning on LinkedIn, I'd read a post about some recent advances in SMR developments. I enthusiastically added these breaking "insights" into the conversation. One of my colleagues asked me to send them the article after the meeting.
When I got home, I dutifully fired up LinkedIn and searched for the article I'd read that morning. As it didn't have any sources, I turned to ChatGPT4.0 and asked it to gather the sources for me, so that I could send a more authoritative article to my colleague.
Sure enough, my AI BA dutifully came back with a beautifully structured and well-written article about the latest technology I'd enquired about, even providing details of the economics, energy output, and plans by the Japanese Government to roll out the technology by 2030.
But then I looked at the sources. First, all the sources specifically relating to the technology were from blogposts, social media, Reddit, LinkedIn etc. The quality sources were general review articles about developments in SMR technology, but these didn't appear to reference the specific technology of interest.
A strange sensation developed in my gut. Something wasn't right.
As I clicked on each of the "low quality" sources, I found they were recycling the same "facts" with remarkable consistency. The only thing that differentiated them was the images... images that had that characteristic "AI-generated" feel about them. Some of these AI images were even shared between sources that didn't reference each other, and there was no image credit.
I realised at that point that I'd been duped. (The elapsed time between starting my search and this lightbulb going off was only about five minutes, but it felt like an age.)
So now I asked a balanced but probing question of ChatGPT4.0 "Deep Research" Mode, to investigate whether this breaking-news SMR technology was legitimate or not. And lo and behold, it generated a report setting out all the evidence and arguments for why the story was "fake news" - something I'd already concluded with high certainty.
Red-faced, I emailed my colleague and reported that I had unwittingly passed on fake news. I said that if he was interested in the state of SMR technology, he could read the following link to a September 2024 report by the IAEA.
When I went back to the original blogpost by a prolific US tech blogger (which seemed to be the primary source from which all the LinkedIn AI-generated fake posts, written by multiple "content creators", had been spawned), I could see why it duped me. It had enough elements that were close enough to where the industry actually is to be plausible. I like to think of myself as not easily fooled, given that I've spent at least 20 years in one form or another working in the energy sector. But I had been fooled... almost.
More and more of the digital content I am scrolling through online is clearly being automatically generated. In this case, I don't know how the tech blogger made the error he did. I don't believe it was a deliberate act of spreading false information, but rather an act of laziness, in generating a daily or weekly blogpost to maintain their place in the algorithm.
So, as the proportion of "garbage" in the digital universe multiplies, and trains the LLMs, what does that mean for the effectiveness of AI? Clearly algorithms can be written to QA/QC content, to error trap this kind of issue. But it is evidently not yet effective in the tools I am using.
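One very crude version of that QA/QC idea is simply to check whether a claim has any support outside known low-trust, easily-gamed platforms. A minimal sketch - the domain list, the example URLs, and the whole approach are illustrative assumptions on my part, not how any real tool actually works:

```python
from urllib.parse import urlparse

# Hypothetical list of platforms where AI-generated recycling is common:
LOW_TRUST = {"reddit.com", "linkedin.com", "medium.com"}

def independent_support(source_urls):
    """Return the set of source domains that are NOT on the low-trust list."""
    domains = {urlparse(u).netloc.removeprefix("www.") for u in source_urls}
    return domains - LOW_TRUST

# The SMR story's sources were all social/blog platforms, so a check like
# this would have flagged it as lacking independent support:
sources = [
    "https://www.reddit.com/r/energy/some-post",
    "https://www.linkedin.com/pulse/smr-breakthrough",
]
if not independent_support(sources):
    print("No independent sources found - treat the claim as unverified.")
```

Of course, real fact-checking needs far more than domain filtering (cross-referencing claims, image provenance, publication dates), but even a filter this simple would have caught the pattern I described above: many posts, one shared pool of low-trust sources.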
What are others finding?
Hi all, following on from the weekend convo, I came across this on ABC's Late Night Live:
Yanis Varoufakis on Trump's shock plan for the global economy
Economist Yanis Varoufakis says we shouldn't underestimate Trump - he has a plan to shock the global economy, force foreign countries to buy crypto-currency in return for a deal on tariffs, and ultimately pay down debt and keep the US dollar at the centre of the global financial system. But it's a high risk strategy that also paves the way for the "tech bros" to make a lot of money.
Cheers!