Forum Topics AI Hype
Slomo
Added a month ago

Another couple of interesting pods - from the two Thompsons, Ben (Stratechery) and Derek (Plain English), discussing more bullish AI views with a nod to bearish and bubbly counterpoints.

https://stratechery.com/2026/agents-over-bubbles/

https://www.derekthompson.org/p/yes-ai-is-a-bubble-there-is-no-question

Both discuss the (potentially) game changing advent of agents.

Also value capture moving up the stack via orchestration layers and harness apps.

All makes sense to me and hard to argue - even harder to know how to apply these unfolding insights.

Still, all good fodder for the old brain box.

Taking a model training approach here - ingest a huge amount of data in the day, organise and marinate overnight, then see what pops out when the real world / markets / announcements prompt and prod me in the morning.

If I could just cut back on the hallucinations... I actually thought the US and Israel had started a war with Iran the other day - imagine that!

23

Strawman
Added a month ago

Will definitely check these out @Slomo

Having had a play around with agents, and still trying to build a custom one in my spare time, I'm pretty bullish on the idea.

Connecting a reasoning engine to a code execution layer has some wild implications. And while there are challenges, it's amazing how well the process can work (you just have to spend a bit of time fine-tuning these things). Even I've managed to get one to reliably execute a variety of commands on request.
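To make the "reasoning engine plus code execution layer" idea concrete, here's a toy sketch of the loop. This is not any particular SDK's API; `fake_llm` is a made-up stand-in for a real model call, and the command mapping is invented for illustration:

```python
import subprocess

def fake_llm(prompt: str) -> str:
    """Stand-in for an LLM call: maps a request to a shell command.
    A real agent would call a model API here instead."""
    if "disk" in prompt:
        return "RUN: df -h"
    return "DONE: nothing to do"

def agent(request: str, max_steps: int = 3) -> str:
    """The core loop: ask the 'reasoning engine' what to do, execute any
    command it proposes, feed the output back, stop when it says DONE."""
    context = request
    for _ in range(max_steps):
        decision = fake_llm(context)
        if decision.startswith("RUN: "):
            cmd = decision[len("RUN: "):]
            result = subprocess.run(cmd, shell=True,
                                    capture_output=True, text=True)
            context += f"\nOutput of `{cmd}`:\n{result.stdout}"
        else:
            return decision
    return context
```

The whole trick is that the model's text output drives real execution, and the execution results flow back into the model's context - which is also exactly where the risk (and the need for fine tuning) comes from.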

I can recommend the Cloudflare Agents SDK if anyone's interested. If you point Claude Code or Cursor at it, you can spin something pretty decent up quickly. And it'll teach you a load about how it all works (at least conceptually).

Have been absorbed in orchestration and delegation hierarchy, prompt engineering and reasoning loops... and while I am totally out of my depth, it's a lot of fun, and from an investing standpoint it's convinced me this stuff is very real, with very significant potential. Of course, at the same time, there's still a huge amount of hype, and I'm sure it'll take a long time to ripen, but it does feel like the world has changed.

22

Scott
Added 4 weeks ago

Large Tabular Models (LTM)

One of the hurdles for me in the use of LLMs in business is their non-deterministic nature. They have revolutionised the ability to examine and produce unstructured data (prose, computer code, graphics, etc.), but non-determinism is an issue in many areas of business, because businesses are accountable for their output (banks, insurers, TNE, WTC, etc). LLMs are also not suited to tabular data, which is where most businesses hold their data. LLMs have been able to abstract the meaning of words across large volumes of text, but this can't be done for tabular data, where the meaning of a value is only relevant in the context of its associated data (e.g. a value of 6 or a status of 'stopped' means something different in every database, whereas the meaning of the word "stopped" across English literature can be inferred).
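The schema-dependence point can be shown in a few lines. The table names and status codes below are invented for illustration; the point is that the same raw value carries different meanings in different tables, so there's no corpus-wide meaning to learn - but a schema lookup is deterministic and auditable:

```python
# The same raw value means different things in different tables:
orders_status = {6: "shipped"}    # in an orders table, status 6 = shipped
machines_status = {6: "stopped"}  # in a factory table, status 6 = stopped

def decode(table_schema: dict, value: int) -> str:
    """Deterministic, auditable lookup: the meaning comes from the table's
    schema, not from statistical co-occurrence across a text corpus."""
    return table_schema[value]
```

An LLM trained on text has no principled way to resolve that ambiguity; a model (or system) that carries the schema with the data does.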

LTMs are starting to emerge to solve these issues and others. They are deterministic, auditable and suited to tabular data.

Personally I think we are at the beginning of a long AI transition into business, with a mixture of models, not just LLMs, being used. I suspect businesses will end up with a variety of domain (or use case) specific models in addition to LLMs. I also wouldn't be surprised if energy and data centre space constraints are a catalyst for these smaller, less resource intensive models; LTMs are also much less resource intensive than LLMs.

LTMs are very new so there are no guarantees. https://fundamental.tech/ is one of the first commercial offerings.

A couple of references.

https://towardsdatascience.com/tabular-foundation-models/

https://wangari.substack.com/p/large-tabular-models-are-here-are


23
BkrDzn
Added a month ago

Anyone thought about, or concerned by, the growing problem of AI platforms like ChatGPT hallucinating answers? It appears to be a feature and not a bug of these systems. Across many platforms, I am seeing more people using whatever output is created in lieu of any real analysis or understanding of an investment, seemingly taking whatever it says at face value. Or the output ends up being very superficial and far from usable analysis. The risk is that many investors become too confident in weak theses. I think there are a couple of users in here who don't post without a pass through one of these platforms, or who exclusively generate straws with them.

26

thetjs
Added a month ago

I’ve been seeing a huge increase in this over the last year in professional e-mails.

There is most definitely a style and cadence to emails that have been written by Copilot or ChatGPT.

And when I see it now, it really makes me question whether the person has even read the initial correspondence, or just cut and pasted the response to keep moving on to the next task.

It's also shocking to think that this approach is going to fundamentally change how we communicate (i.e. what words are used, the structure of sentences) as a species.

22

mikebrisy
Added a month ago

@BkrDzn I think this is a real issue. I am not sure that this is about "hallucinating answers", maybe it is. (And yes, I am forever having to send my AI back to the drawing board because of errors ... and so you have to always ask yourself, what errors have I failed to detect?)

Just because I have a saw, a hammer, a ruler, and a drill doesn't make me a carpenter!

Recently I had a go at updating my $PNV valuation. (Published here, so I don't need to repeat).

As I have noticed increasing posts by StrawPeople using LLMs to do their valuations, I thought I'd give it a go to compare with my own work. I drafted an identical, detailed prompt and gave it to both ChatGPT and Claude. My prompt asked for a DCF methodology with an 8-year explicit period. I specified the continuing value growth (3.5%) and zero debt. I asked that it triangulate the valuation with EV/EBITDA and P/E industry benchmark checks in the terminal year.
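For anyone unfamiliar with the mechanics, the structure of that prompt maps onto a very simple calculation. The cash flows, discount rate and share count below are invented placeholders purely to show the shape of the method, not anyone's actual PNV forecast:

```python
def dcf_per_share(fcf, wacc, terminal_growth, shares):
    """Discount an explicit free-cash-flow series, add a Gordon-growth
    terminal value on the final year, divide by shares outstanding."""
    pv_explicit = sum(cf / (1 + wacc) ** t for t, cf in enumerate(fcf, start=1))
    terminal_value = fcf[-1] * (1 + terminal_growth) / (wacc - terminal_growth)
    pv_terminal = terminal_value / (1 + wacc) ** len(fcf)
    return (pv_explicit + pv_terminal) / shares

# Placeholder inputs: 8-year explicit period (A$m), 3.5% terminal growth,
# a 10% discount rate and 700m shares - all hypothetical.
fcf = [10, 15, 22, 30, 40, 50, 60, 70]
value = dcf_per_share(fcf, wacc=0.10, terminal_growth=0.035, shares=700)
```

The arithmetic is trivial; as the rest of this post argues, everything rides on whether the `fcf` series and the discount rate bear any resemblance to reality.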

I didn't save the outputs, so this next bit is from memory - and might not be precisely right, but it is directionally correct.

ChatGPT ended up taking quite a bearish view, and generated valuations ranging from about $0.50 to $1.10, with an expected value of around $0.70.

Claude was surprisingly bullish, generating a range from $1.30 up to over $1.80 (I think), with an expected value of around $1.60.

What was interesting to me was that these are supposed to be ranges that span likely outcomes, yet the two LLMs delivered non-overlapping ranges, which means that at high likelihood (80%?) one is right and the other is wrong. So which was right and which was wrong? I have no way of knowing.

And I think this comes back to your point. We can all bang numbers into a spreadsheet, be it a DCF or a financial projection or McNiven or any other method you like... It doesn't matter.

What matters is whether the values that go into the calculation bear any resemblance to how the future will pan out. I use both DCF and terminal P/E methods, but before I commit to a valuation, I look at the projections in every scenario and every year and ask myself "is this a reasonable representation of how this business might be managed and might perform in its competitive environment in year X?" That takes quite a bit of work. Even updating a valuation for a company I know very well can take a solid 3-4 hours' work. (Actually, sometimes I do a quick val or a quick update. In those cases I always use the qualifier "placeholder", which means I need to go back at a future time to do the detailed work.)

But for me the work is the whole point. The work generates my understanding. The numerical assumptions are an expression of what I consider the business will be doing in its market (revenues) in a competitive context, and how it will operationally deliver those outcomes (via margins, operating efficiency/leverage and reinvestment).

My valuations are little more than a fact-based and analysis-based belief about the future.

If someone else does the modelling and gives me the result, that is of no use to me. Because without doing the work, I don't get the understanding. It's not about agreeing or disagreeing with their numbers and assumptions. Rather, it is through analysing the company and its competitors that I generate the understanding to be able to make the assumptions in the first place.

Don't get me wrong - I use AI a lot. But like an apprentice carpenter, I am slowly learning what tasks need a saw, a hammer, a ruler and a drill. That requires me to understand each task and then to use the right tool in the right way for that task.

So, for the foreseeable future, my valuations will continue to be based on my flawed, labour intensive and arcane process! But at least I'll understand what they represent and, as far as I am concerned, that's all that matters.

35

Slomo
Added a month ago

There's definitely something to this @thetjs.

A bit like how one of the original use cases for the www was supposed to be recipes(!?!)

It seems an early use case for AI was to turn a couple of sentences into a few smart sounding paragraphs you could e-mail to your boss.

Closely followed by your boss using AI to turn your paragraphs back into a few sentences so he could read it quickly.

Similar to AI turning your CV into a world-beating application, and the AI bots on the other end reading through the guff to see if a decision maker wants to see you.

Heard a good quote on a pod the other day - "A fool with a tool is still a fool".

Prompt engineering is a key part of literacy for how to use AI tools.

Although that seems to have been replaced by context engineering now.

Worth staying up to date on this highly evolving space.

Feels like we are still at the start of all this...

16
Solvetheriddle
Added a month ago

Another useful podcast on how software companies are attempting to adjust to AI. This interview with the CEO of TEAM, MCB "the Double Bay Jesus", was quite illuminating. Of course he is CEO of a large s/w co, so bear that in mind. I think the pieces are falling into place, which may be famous last words. As I've said before, I have learned more about the software industry in the last few months than at any time before.

I'm a buyer of the apocalypse; my hope is I'm getting on the right horses.


https://www.youtube.com/watch?v=0lzo2tFBFy8

17

Slomo
Added a month ago

Thanks for the share @Solvetheriddle, good insights from the coalface, wise to calibrate for inherent biases.

Here's another recent a16z preso I found useful, on moats etc. in the age of AI, with case studies.

The TLDW is at the 30 min mark - "Moats matter more than ever because you're able to create software more readily"

https://www.youtube.com/watch?v=3XVDtPU8xKE

Also includes a discussion of brownfield vs greenfield opportunities. Essentially another take on the "AI eating services" (AI bull case) theme I've been hearing more about recently.

And a discussion of data moats / walled gardens.

More detail on their walled gardens views here - "Fruits of the Walled Garden"

https://a16z.com/fruits-of-the-walled-garden/

Worth noting the a16z host Alex Rampell is their head of AI Apps, so he's pitching hard.

12
Slomo
Added 2 months ago

Another update from Cal Newport, looks like he'll be doing more fact checks like this over time.

https://www.youtube.com/watch?v=JRayjrpX10k

Includes a rebuttal of the claim that Block's (ASX:XYZ) headcount cuts were due to AI - rather, AI is being used as cover for FTE cuts, as has been argued elsewhere (in this case due to over-hiring in COVID and M&A since).

Similar story for WTC, I expect.

There's just enough truth in this for it to be believable, but AI is the best spin to put on it, and the share price pops for XYZ and WTC suggest it worked.

Beyond that a nice assessment of what AI really enables, practical applications and some implications from that.

11