USING NOTEBOOKLM AS A USEFUL INVESTMENT DECISION-MAKING SOURCE
Slomo
Added 2 months ago

Agree with @Solvetheriddle and @mikebrisy on this, I’m a big fan of NotebookLM (NLM) and probably use it more than any other AI tool.

It’s a ‘grounded’ model, so it only runs off the data you give it – not whatever is on the internet, like typical (open) LLMs.

It’s great for consolidating and synthesising a lot of information.

Example – load a few years of call transcripts, ASX presos and annual reports, then ask it where management has changed their strategy, messaging or KPIs, or hit or missed the targets they set for themselves.

It’s also multimodal in both directions.

So you can input PDFs, audio recordings, URLs, etc.

You can then output video, audio, slides, infographics, reports, etc.

You can just generate these as default outputs, but I think you need to customise them for two reasons: 1) to set the tone, or it’s just too light, confirmatory, hokey and American for me; 2) to fit your purpose.

I tend to use standardised prompts like:

I am looking for a careful, critical, balanced analysis including pros and cons, and a weighted assessment of these, not a glowing sales pitch.

Keep it professional, to the point, objective, incisive, analytical and evidence-based, focusing on practical application of the main concepts, frameworks, models and insights.

I customise all audio and video outputs with this at least.

I don’t need to use the standard LLM prompts like “Don’t make anything up, take your time, think through this deeply and logically, etc, etc.”
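If you reuse the same framing every time, it can help to keep it as a template and bolt the specific task on the end. A minimal Python sketch – the `build_prompt` helper is purely my own illustration, not anything NLM provides; the wording is just the standard prompt above:

```python
# A reusable "analyst tone" prefix for customising NLM outputs.
# Helper name and structure are illustrative only.

ANALYST_TONE = (
    "I am looking for a careful, critical, balanced analysis including "
    "pros and cons, and a weighted assessment of these, not a glowing "
    "sales pitch. Keep it professional, to the point, objective, "
    "incisive, analytical and evidence-based, focusing on practical "
    "application of the main concepts, frameworks, models and insights."
)

def build_prompt(task: str) -> str:
    """Prepend the standard tone instructions to a specific task."""
    return f"{ANALYST_TONE}\n\nTask: {task}"

print(build_prompt("Summarise where management has changed their KPIs."))
```

The point is just that the tone instructions stay constant while only the task line changes per output.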

I find the audio (podcast) output particularly useful. It comes in three sizes: short (~5 mins), default (~15 mins) and long (~40 mins).

You can even interact with it – when it says something interesting you can pause and ask it to say more about that topic or challenge it.

Or you can download the long version, ask it to generate a transcript, and then listen at 1.5–2x while reading along.

It’s like any AI tool, the more you use it and iterate how you interact, the more you get to understand it and improve the value you get from it.

I’m on the paid version (outstanding value) but I think you can use a free version too.

Enjoy!


Solvetheriddle
Added a month ago

@Slomo well said – thoughtful guided prompts make a big difference. Have to try the output media.

Solvetheriddle
Added 2 months ago

USING NOTEBOOKLM AS A USEFUL INVESTMENT DECISION-MAKING SOURCE

The following may be old news for some, but it may be useful for others; this is my journey. Must thank @Slomo for the heads-up here. Over the past while, I have been building a research library on NotebookLM (NLM). NLM is part of the Google stable, and I think it is very well suited to research.

When I moved from professional investor to retail punter, one of my concerns was the quality of input information. I was truly shocked at the apparent extent to which people relied on social media feeds from unknown and/or untried sources – something I was quite reluctant to do. Taking a Google search, or a Twitter comment, etc., as a source of truth for a large investment decision is quite a hurdle for me (still is).

So what's going on here?

What NLM allows you to do is set up your own source of truth by admitting only information you have vetted into the library. That lets you handpick the material the output commentary is based on. For instance, I have spent the last couple of weeks loading hundreds of documents into my library – mainly results-call transcripts, company presentations, and research that I thought was particularly insightful. Of course, the product is to a large extent dependent on the inputs. I also have my financial models as part of the library. One issue I have found is that some research is protected; maybe we need a workaround here.

NLM is arranged by individual notebooks, i.e., they don’t talk to each other unless you summarise from one and paste into another, which you can do. Below is part of my current notebook list. Most of these have enormous amounts of content. I have settled on industry notebooks, thinking there is some utility in that without being too narrowly or too widely focused. My “Models” notebook below is a general notebook where I can ask it to look across all my model spreadsheets for trends or for screening. One goal I have is to coordinate my investment philosophy and portfolio with the models and other data, to critique my portfolio for consistency. There is a wide range of possibilities in this structure.


[Screenshot: part of my current notebook list]
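To illustrate the kind of screen I mean, here is a toy sketch in Python – all company names and figures are invented, and in practice NLM reads the spreadsheets itself; this just shows the shape of the filter you might ask it to apply:

```python
# Toy screen across several company "models" (all figures invented).
models = {
    "Company A": {"revenue_growth": 0.12, "ebit_margin": 0.18},
    "Company B": {"revenue_growth": 0.03, "ebit_margin": 0.25},
    "Company C": {"revenue_growth": 0.15, "ebit_margin": 0.09},
}

def screen(models, min_growth, min_margin):
    """Return company names passing both thresholds, alphabetically."""
    return sorted(
        name for name, m in models.items()
        if m["revenue_growth"] >= min_growth and m["ebit_margin"] >= min_margin
    )

print(screen(models, min_growth=0.10, min_margin=0.15))  # ['Company A']
```

Asking NLM the equivalent in plain English ("which companies in my models grew revenue above 10% with EBIT margins above 15%?") is the whole attraction – no code required.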

What has become apparent to me is the value and skill involved in questioning the data source, i.e., prompting. Various sites can help you improve your skills here; how you ask the question matters. Simple questions get simple answers. I have also found that AI is not that intuitive, and outlining the scope and nature of your query helps. Thinking about my past interactions with various analysts over the years, maybe they weren’t too intuitive either, lol. E.g., telling NLM to be sceptical and to search for changes in management’s narrative over time is interesting. It is worthwhile spending a bit of time thinking about what you want and how to get it. I suspect you can improve a lot with practice.

The other big positive I see is the reduction in wasted time and effort. We only have so much energy and time to put into something, and much of mine gets consumed in collecting, collating and coordinating data, or going over reams of stuff looking for one gem. NLM has the potential to turn this inefficiency on its head: instead of 80% of my time being spent searching, collecting and organising, I can spend 80% of it analysing and thinking about responses – a big improvement.

Google has now combined NLM with Gemini, its LLM. I have just started to use this feature, and it brings the wider web into the database. That can be useful for specific issues, although I’m only starting to scratch the surface here. It brings much wider content, but with a lower compliance or quality check. You can easily imagine the possibilities.

Properly used, NLM is like having a trained junior analyst. In fact, that’s what I named my Gemini “Gem”: my trusted junior analyst. There is likely a long way to go here, and as @Slomo says, better to be on the train early.

I can already see that I will spend more time talking to NLM and less time on other sources.

All the best.




mikebrisy
Added 2 months ago

@Solvetheriddle I have also been getting a lot of value from building libraries in NLM, though not quite in the systematic way you have.

There is a real benefit in understanding what the input dataset to an LLM work product is, because otherwise the inputs can be all kinds of wacky sources.

One thing I am finding, however, is that LLMs across the board make errors in reading PDF sources. In my experience, these errors quite often occur with numerical data. So, increasingly, I am having to spend a lot of time error-checking and then correcting AI work products.
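One cheap, partial check I can imagine automating is pulling every number out of an AI summary and confirming each appears somewhere in the source text. A rough, purely illustrative Python sketch – regex-based, so it will miss rounded or reformatted figures:

```python
import re

# Matches numeric tokens like 1,250 or 8.5 (thousands separators allowed).
NUM_RE = re.compile(r"-?\d[\d,]*(?:\.\d+)?")

def numbers_in(text: str) -> set[str]:
    """Extract numeric tokens, normalising away thousands separators."""
    return {m.group().replace(",", "") for m in NUM_RE.finditer(text)}

def unverified_numbers(summary: str, source: str) -> set[str]:
    """Numbers the summary quotes that never appear in the source."""
    return numbers_in(summary) - numbers_in(source)

source = "FY24 revenue was $1,250m, up 8.5% on FY23."
summary = "Revenue of $1,250m grew 9.5% year on year."
print(unverified_numbers(summary, source))  # → {'9.5'}
```

Anything the check flags still needs a human look (it can't tell a transposed digit from a legitimately derived figure), but it narrows where to spend the error-checking time.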

I've previously described using LLMs as being like my earlier days as a management consultant project manager when I'd have super-talented business analysts working for me, but who had no real world experience with which they could sense check their own analysis.

I am getting to the point with my analytical work with LLMs where I feel like going to the staffing team and saying: "can you please change out BA "X" on my current team, because they just make too many errors and don't check their own work, despite my coaching them".

What experience are you (and others) having with LLMs in quantitative data handling and error generation? (Beyond dodgy sources.)

More and more I am finding LLMs very good at textual, qualitative analysis and summarising. But even here, the synthesis process can be flawed and the LLM can tell a confident "story" that belies the shakiness of the underlying analysis. One thing I am doing more of on important analysis is getting one LLM (e.g. ChatGPT) to develop a draft work product. Once I am reasonably OK with it (so usually after several iterations to fix obvious flaws), I feed it into another LLM (e.g. Claude) and request a "critical analysis" of the work. I then ask the second LLM to redraft the work product addressing all the limitations identified.
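That draft/critique/redraft loop can be written down as a small pipeline. A hypothetical sketch only – the `draft_model` and `review_model` callables stand in for whichever LLMs you use, with the real API calls omitted:

```python
from typing import Callable

# Stand-in type for any "send prompt, get text back" LLM call.
Ask = Callable[[str], str]

def cross_check(task: str, draft_model: Ask, review_model: Ask) -> str:
    """Draft with one model, critique with a second, then redraft."""
    draft = draft_model(f"Draft a work product for: {task}")
    critique = review_model(
        "Give a critical analysis of this work, listing all flaws:\n" + draft
    )
    return review_model(
        "Redraft the work, addressing every limitation identified.\n"
        "Work:\n" + draft + "\nCritique:\n" + critique
    )

# Demo with trivial stand-ins for real models:
drafter = lambda prompt: "draft-text"
reviewer = lambda prompt: "review-of(" + prompt[:20] + ")"
print(cross_check("FY24 results summary", drafter, reviewer))
```

Using a *different* model family for the critique step is the design choice that matters: a second model is less likely to share the first one's blind spots.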

I guess the point of this is, the more I am using LLMs, the more I am becoming sensitive to the errors and limitations, and recognising the need to develop explicit strategies to counter this.

The more I am using LLMs, the more I am forming the view that the world is becoming filled with people who use AI to do their work, many (most?) of whom are, as a result, losing their own critical reasoning and analysis skills. For example, my LinkedIn feed is flooded with "Influencers" sharing "AI best practices", which I suspect are pretty meaningless products supported by little if any research.

I wonder how much of the potential value-add of AI is going to be undermined by the (apparent) fact that many of the people using it are going to become progressively more stupid over time as they depend more on AI to do the thinking, while their own minds atrophy?

An analogy would be the US economy. While income per capita has outstripped many developed nations over decades, the country has gorged itself on junk food and drugs so that measures of healthcare (morbidity measures; obesity; infant mortality; life expectancy etc.) are underperforming less "affluent" nations.


Solvetheriddle
Added 2 months ago

@mikebrisy interesting, and I have found some numeric errors. If I see something that doesn't sit well with me, I will double-check it. However, I am using it more on the qualitative side. NLM does cross-reference, but it is worth checking it and asking it to clarify or provide quotes to support the conclusion.

The self-confidence I find annoying, but that's life; I think you can change the tone. I also ask it to be sceptical, which is useful. So the major aid is covering a lot of ground (and I mean a lot) quickly, which leaves me fresh and still enthusiastic.

A common use for me is asking it to identify, summarise, and analyse the questions in results Q&A that address a certain issue. Of course, I can't be sure it is 100% accurate in this coverage, but it seems quite good.

You are right – I call it my junior analyst, so it can't be taken completely at face value, and it may not fully understand the context at times, but it does save me enormous time and energy.

Just thinking as I am writing: it probably helps if you have a working knowledge of the subject, rather than coming in completely cold, and that suits me. Someone pointed that out to me, and I think they are right.

Anyway, I'm persevering.



reddogaustin
Added 2 months ago

@mikebrisy your experience is accurate. LLMs are built for words and unstructured data. What's next is LTMs, large table models: AI designed to read structured data or, in your example, numbers in tables.

That's my understanding anyway, and why our BAs are not the best in maths class.

Edit: correction from phone typing pre-coffee.
