If you've got no idea where to start, this is for you (assuming any of you will admit to that). Hopefully it helps someone! Feel free to ask questions; I'm sure someone on here will be able to answer them if I can't.
I mostly use Windows 11 Copilot. It compares files, answers questions about anything I upload, and keeps my workflow simple. I’ve even built a Python scraper to download ASX announcements. The ASX likes to change its layout every now and then, so the script needs updates from time to time. AI writes the new code, I run it, and the job gets done. You can automate the whole thing if you want to.
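For anyone who wants to try something similar, here's roughly what the bones of that kind of scraper look like. To be clear, this is a minimal sketch: the URL and JSON field names below are placeholders I've made up for illustration, not the real ASX endpoint. Inspect the site's network traffic (or just ask the AI) for the current details, and expect to regenerate it when the layout changes.

```python
# Minimal sketch of an announcement downloader.
# NOTE: the endpoint URL and JSON field names ("data", "url", "header")
# are illustrative assumptions, not the real ASX API.
import requests

def fetch_announcements(ticker: str, count: int = 20) -> list[dict]:
    # Hypothetical endpoint; the real one changes from time to time.
    url = f"https://example-asx-endpoint/company/{ticker}/announcements"
    resp = requests.get(url, params={"count": count}, timeout=30)
    resp.raise_for_status()
    return resp.json().get("data", [])

def download_pdfs(announcements: list[dict], folder: str = ".") -> None:
    for item in announcements:
        pdf_url = item.get("url")          # assumed field name
        title = item.get("header", "doc")  # assumed field name
        if not pdf_url:
            continue
        pdf = requests.get(pdf_url, timeout=60)
        pdf.raise_for_status()
        # Make the title safe to use as a filename.
        safe = "".join(c if c.isalnum() else "_" for c in title)
        with open(f"{folder}/{safe}.pdf", "wb") as f:
            f.write(pdf.content)

if __name__ == "__main__":
    download_pdfs(fetch_announcements("BHP"))
```

When the ASX shuffles its layout, I paste the error and the new page structure into the AI and it rewrites the broken parts. That's the whole maintenance workflow.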
There are plenty of AI engines out there, and I'll get to the main ones people actually use in a moment. First, though: what is AI?
AI is software that can understand what you ask, figure out what you mean, and produce something useful in response. No sci‑fi. No mystery.
The simplest way to think about it:
It doesn’t think like a human.
It doesn’t “know” things the way people do.
It doesn’t have feelings.
It predicts what comes next based on patterns.
It’s autocomplete with power tools.
You don't need to know how to do the task; you only need to describe it.
Pick a tool: Copilot, ChatGPT, Claude, Gemini. Any of them works.
Start with simple jobs: summaries, comparisons, explanations.
Upload what you're working on. AI is far more useful when it can see what you're talking about.
Tell it the tone, purpose and length you want.
You're the boss, it's the assistant.
Push back: shorter, clearer, try again, change the angle.
It becomes natural fast.
Clear instructions = clean results.
Prompt framing is simple. Use this formula:
Role + Style + Goal + Constraints
Example:
“Act as a senior analyst. Write in short, direct sentences. Your job is to summarise this document for someone who has 30 seconds. Keep it under 120 words.”
That’s how you get precise, consistent output every time.
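If you want to make the formula repeatable, it's trivial to wrap in a few lines of code. This is just a sketch of the Role + Style + Goal + Constraints idea; the function name and wording are mine, not any official template:

```python
# Sketch: composing a prompt from the Role + Style + Goal + Constraints formula.
def build_prompt(role: str, style: str, goal: str, constraints: str) -> str:
    return (
        f"Act as {role}. "
        f"Write in {style}. "
        f"Your job is to {goal}. "
        f"{constraints}."
    )

prompt = build_prompt(
    role="a senior analyst",
    style="short, direct sentences",
    goal="summarise this document for someone who has 30 seconds",
    constraints="Keep it under 120 words",
)
print(prompt)
```

Save your best versions and reuse them. The consistency comes from the structure, not the tool.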
I use it a lot now; there aren't many things I don't run past it. The amount of time I've saved on research is enormous.
Here’s the thing about AI that most people get wrong right from the start.
AI is fast. Stupidly fast. It can chew through more text in a second than we can read in a month and it doesn’t even blink. That’s the part that throws people. They see the speed and assume there’s some kind of mind behind it. There isn’t. It’s just running patterns.
It’s good at spotting structure. It sees the shape of things. It sees how sentences usually go, how code usually works, how numbers usually line up. It doesn’t understand why or how. It just recognises the pattern and spits out whatever normally follows that pattern.
It has no idea what anything means. None. It doesn't know what's true or false, what's important or missing. Importantly, it doesn't know when it's talking rubbish. It just keeps predicting the next likely thing.
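To make "predicting the next likely thing" concrete, here's a toy version of the idea. Real models use billions of learned weights rather than a word-pair table, so treat this purely as an illustration of the mechanism, not as how ChatGPT actually works:

```python
# Toy "autocomplete": predict the next word from counted word pairs.
# Real LLMs are vastly more sophisticated, but the principle is the same:
# pick a likely continuation based on patterns seen in training text.
from collections import Counter, defaultdict

training_text = (
    "the market went up today . the market went down today . "
    "the market went up sharply ."
).split()

# Count which word follows which.
follows = defaultdict(Counter)
for current, nxt in zip(training_text, training_text[1:]):
    follows[current][nxt] += 1

def predict_next(word: str) -> str:
    # No understanding involved: just the most common continuation.
    return follows[word].most_common(1)[0][0]

print(predict_next("market"))  # -> "went"
print(predict_next("went"))    # -> "up" (seen twice, vs "down" once)
```

Notice the table has no idea what a market is. It only knows what tends to follow what. Scale that up enormously and you get something that sounds fluent without knowing anything.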
That's why it's useful. It takes the slow stuff off your plate. Drafting. Sorting. Rewriting. Comparing. Pulling information together. Anything where the shape is obvious and the meaning doesn't matter too much, it'll do faster than you ever could.
But here’s the bit people miss. It only answers the question you put in front of it. If you ask it to compare two things, it will compare them. It won’t stop and tell you the comparison itself is pointless. It won’t tell you both options are bad. It won’t tell you the real issue is somewhere else. It won’t challenge the frame. If you don’t ask it to stress‑test something, it won’t. If you don’t ask it to look for weaknesses, it won’t. It assumes the task is valid because you asked for it. That’s how people end up with neat answers to the wrong question.
And that’s why it’s dangerous. People start thinking it’s smarter than it is. They let it make decisions it shouldn’t be making. They trust it in places where meaning actually matters. That’s when it bites them.
The whole trick is simple. Let it do the fast work. Keep the thinking for yourself. That’s it. Nothing mystical. Nothing deep. Just a tool that’s very quick and completely clueless.
Use it for what it’s good at. Don’t hand it the parts that actually matter.
@Foxlowe, 12 months ago I'd be nodding furiously at the limitations of AI you've outlined. But it is improving quickly.
In response to my queries about A and B, I'm getting responses from Chat saying "but have you thought of C". It is now going beyond my prompt and considering the wider context of the conversation.
And just last week Chat got something wrong, and I pointed it out. Chat first came back proposing an upside to its error. I then bluntly said "Nice try, but there is no upside to this. You stuffed up. It happens. We move on." Chat then came back with "Sorry Pete, fair cop, I shouldn't have tried to spin that." It is showing a sense of what is important in the context of the conversation and managing a relationship.
Sure, it doesn't yet have the same very broad mental models that we possess. But it is developing those mental models. And as processing power increases the context it draws upon for replies will get wider and wider.
Thanks @DrPete — genuinely appreciate the reply.
You’ve raised some really valid points, so let’s explore what’s actually happening under the hood.
You’re right that the models are getting better at mimicking the parts of thinking that look like reasoning. The rate of growth is ridiculous at the moment — new versions, new capabilities, and new behaviours appearing almost weekly. It’s no surprise people think something deeper is going on. That’s the part that catches people off guard.
When it says “have you thought about C?”, it isn’t having a bright idea. It has simply seen enough conversations to know that C often shows up after A and B. That’s not insight. That’s pattern recognition.
The same thing happens when it apologises or adjusts its tone. It isn’t managing a relationship. It’s copying the kind of response that usually smooths things over. It has no sense of your mood or the stakes. It only knows the shape of a reply that tends to reduce friction.
This is why AI feels like it’s getting smarter. The surface keeps improving. The underlying mechanics haven’t changed. It remains an extraordinarily fast pattern‑matching machine, very useful, yet still a machine running code to look like real thinking.
Meaning, judgement and responsibility live on the human side. AI can be programmed to appear as if it’s thinking beyond pattern‑matching, yet it isn’t. It’s still producing something that looks like reasoning without ever understanding what it has produced.
AI will happily give you a confident answer even when the entire premise is wrong. It has no concept of “wrong”. It only knows what usually comes next.
So the rule stays simple:
Let AI handle the speed; we still need to handle the thinking. That's the whole trick.
@Foxlowe, we might have to agree to disagree on this one. I wonder whether we as humans like to think we have some special sauce that's superior to what AI is doing. Sure we still have some, but it is getting smaller.
I don't know that there's a difference between our insight/meaning/judgement and AI's pattern recognition. If AI can think of C when we're discussing A and B, I'd call that insight and understanding. In time it will also be thinking of D and E and F. If the AI is smoothing things over and reducing friction with me, it must, at least in a non-conscious way, be sensing my mood.
And I do think AI has a concept of "wrong" in a way that's not too different from what "wrong" means to humans. Through our biology and experience, we associate some actions as desirable and others as undesirable or as creating negative consequences. AI can make, and is making, those same links. Definitely not as sophisticated as us yet, but that's just a matter of more processing power enabling it to link in more mental models.
Yeah the AI is relying on pattern recognition, but that's the bulk of what our brains are relying on as well. Our brains draw upon massive networks of linked concepts. And that's the underlying structure for AI.
Don't get me wrong, I think true "general AI" is further away, and will be more difficult to achieve, than many fear. But I'm also humbly accepting that our brains are massive linking machines and modern AI is able to mimic much of that.
Interesting thread! One thing I’d add from a recent Stanford study is that AI tends to be overly agreeable.
New study says AI is giving bad advice to flatter its users | AP News
We all sort of knew, unconsciously, that this is what the AI chats are doing, but this paper confirms it.
It’s especially relevant for investing: if you feed it your thesis, it will often reinforce it rather than challenge it.
That’s useful for speed, but dangerous for judgement. Critical thinking is still on us... and probably one of the biggest skills to keep sharpening in an AI world.
Thanks @DrPete — really appreciate the thoughtful reply.
I think we're closer in view than it might look. A lot of human thinking is pattern based. Most of what we call judgement comes from experience, association and reinforcement.
Where I draw the line a bit differently is here:
When an AI jumps from A and B to C, it isn't understanding the relationship. It's repeating the statistical shape of conversations where C usually follows. It can look like insight, and it's often useful, but we need to be mindful that the mechanism underneath is very different.
There's another practical point worth adding. When AI suggests C, it's not because it discovered something new. It's because enough people before us asked "what about C?". The model explores that branch, sees that it works, and the pattern gets reinforced. By the time the 100th person asks the same A/B question, the model has learned that C is a common continuation. It offers it because the weight of past conversations pushes it there. That's not insight. It's accumulated pattern weighting.
And this is where humans get tripped up. We’re wired for coherence, not truth. When an AI produces something tidy and internally consistent, our brains treat that coherence as evidence of understanding — even when there’s no underlying model of the world behind it. It’s a cognitive shortcut that works well with people; unfortunately it fails badly with AI engines.
Humans can be wrong, biased, emotional, inconsistent — no argument there. Humans build memory slides from lived experience. Every mistake, every success, every misread situation becomes a little internal reference card we pull out later. When we say something is wrong or risky or off, we’re comparing it to one of those stored episodes.
In emergency services we train this deliberately. We run simulations so candidates can see what happens when a situation goes sideways, how their decisions change the outcome. Those experiences become memory slides we rely on automatically later. When the real thing happens, your brain doesn’t distinguish between the training scenario and the live one. It just pulls the slide and acts.
Where we may continue to differ is on whether more processing power and more linked models eventually close that gap. Maybe it will. Maybe it won’t. Right now, the thing that looks like understanding is still just very fast pattern matching — and because of that, it matters when we’re deciding how much to trust the output.
Thanks for sharing that @jlozanol — that was a really interesting read.
The Stanford study lines up with what we’re circling here: the model isn’t agreeing because it evaluated anything, it’s agreeing because the training data rewards coherence and user satisfaction. That AP summary about people trusting AI more when it reinforces their convictions was especially striking.
Your last point really matters too — critical thinking is still on us.
If anything, studies like this show we need to be even more on guard, because the model’s “agreeableness” feels like insight when it’s really just pattern‑weighting. That’s great for speed, but something we need to stay conscious of when judgement actually matters.
That's a great synthesis, @Foxlowe.
I'm intrigued by your background in Emergency Services. I'm an emergency physician and have a (very amateur) interest in psychology. (@DrPete - I'm opening myself up to much ridicule, I'm sure!)
In recent decades there has been a huge move to train us away from heuristic/type 1 thinking and more towards type 2/deliberative thinking.
I recently spoke at a conference on the subject of risk. The thrust was that the pendulum may have swung too far towards checklists and guidelines, and that in the process we have lost much of the benefit that pattern recognition can bring us, with caveats. It leaned towards how risk taking is a force for good in the creation of a well-rounded emergency provider, and how sticking exclusively to institutional guidelines creates poor clinician development and potentially poor patient outcomes: unintended consequences and all that!
I wonder how this applies to the investing sphere? And how our queries to our AI helpers might play into this paradigm.
I'm getting pretty sick of these AI generated slop posts which seem to have infected this forum in the last few months.
Why pay money to be a member when you can just ask ChatGPT?
Depends on whether you're going to ask ChatGPT the same questions @UlladullaDave - keeping humans in the loop clearly adds value, and I see this evidenced by members here asking different questions than the ones I'm asking ChatGPT and similar services. I don't have any issue with people cutting and pasting AI summaries, or entire answers, if those answers are to questions I hadn't thought to ask.
Obviously that doesn't extend to everything - as in, I'm not interested in a considerable amount of content here and I don't engage with that content, but there's certainly enough value here to make the subscription costs very worthwhile. In my opinion. But that's just me. It won't be for everybody.
Depends on whether you're going to ask ChatGPT the same questions
It's not about asking questions, it's about certain posters seemingly using LLMs to create content and replies.
I see. Well, I'm only upvoting straws, posts and valuations where I see value or appreciate the effort that has clearly gone into the content (when I've actually read it), and I expect others upvote along similar lines. So if the LLM-created content you refer to, @UlladullaDave, is receiving upvotes, somebody must be seeing value in it or appreciating the effort.
There’s been a bit of noise lately about AI‑generated content so it’s worth talking about how to use these tools properly. Not as a replacement for thinking, not as a shortcut, not as a way to outsource judgement, just as something that can help you express ideas more clearly.
I get why people react strongly when a post feels like a straight copy‑paste from an AI engine. It doesn’t add much to a discussion. It’s the same as pasting a Wikipedia paragraph and calling it insight. Nobody wants that.
What I’m talking about here is something different.
AI can act like an editor. A sounding board. A second set of eyes. It can help tighten the writing, clarify the structure and point out where something is confusing. The ideas still come from you. The framing and voice still come from you.
Here’s how I use it.
Sometimes it would honestly be quicker for me to just write the whole piece and post it. I still run it through Copilot because the back‑and‑forth helps tighten the writing. It suggests things, I adjust and the end result is usually clearer than the first draft.
(The examples below came from AI prompts I used while drafting this.)
“Give me three ways to structure this idea.”
You still choose the direction.
“Rewrite this paragraph in clearer English, keep my tone.”
You stay in control of the voice.
“What’s the strongest counterpoint to this?”
You keep the judgement.
“Where does this lose coherence?”
You keep the meaning.
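For those who'd rather script this loop than work in a chat window, the same editing pass is a few lines against any chat API. This sketch uses the OpenAI Python client purely as one example; the model name is a placeholder, the file name is hypothetical, and Copilot, Claude and Gemini all have equivalents:

```python
# Sketch: using a chat API as an editor, not a ghostwriter.
# Assumes the OpenAI Python client (openai>=1.0) and an OPENAI_API_KEY
# in the environment; the model name is a placeholder, swap in your own.
from openai import OpenAI

client = OpenAI()
draft = open("my_post.txt").read()  # hypothetical draft file

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "You are an editor. Keep the author's tone and ideas."},
        {"role": "user",
         "content": f"Where does this lose coherence, and what is the "
                    f"strongest counterpoint to it?\n\n{draft}"},
    ],
)
print(response.choices[0].message.content)
```

The system message is doing the heavy lifting: it frames the tool as an editor working on your ideas, not a generator replacing them.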
If the tool starts drifting away from your intent, you pull it back.
The whole point is simple: AI handles the speed, you handle the thinking.
Used well it helps you communicate more clearly. Used badly it becomes noise. The difference isn’t in the tool, it’s in the intent behind the post.
And yes — I used AI while writing this. I’ve been playing with it for about 45 minutes. It wasn’t faster. It wasn’t easier. It was a back‑and‑forth that helped tighten the language and sharpen the structure. The thinking is mine, the tool just helped me express it more clearly. I hope it did.
@DrPete I agree with your view on this. There is as little doubt in my mind that AI will surpass human pattern recognition and ability to generate insights as there was that the combine harvester was going to be a better way of harvesting a large field of wheat than a dozen workers wielding scythes. So if you want to stay in the game of wheat farming you'd better invest in and learn how to operate a combine harvester.
But of course, just as there is more to successful wheat farming than having the best combine harvester, and being skilled at driving it (or having software or a robot to drive it for you!), so success in activities using AI will need the ability to integrate that work into the broader business process, strategy, organisation and technology architecture of the enterprise. It remains an open question for me as to how well the different manifestations of AI will perform these more complex, multidimensional and adaptive tasks, where hitherto differentiated success has been down to human factors often labelled as "foresight", "leadership" and, quite frankly, luck.
I remain of the view that one is less likely to be beaten by AI, than by a human who is more skilled at using AI. (But things are evolving rapidly, so I am not rigid in my views.)
This then brings me back to something I have said here before, addressing the point about over-reliance on AI and generating a pool of "AI slop". People (and organisations, by extension) need to be careful that their use of and reliance on AI does not degrade their own cognitive capability (like a muscle being atrophied). This is where I think the farming analogy breaks down. Once the combine harvester was adopted, the need for workers physically capable of wielding a scythe for a day, day after day, ended.
With AI the challenge is the opposite. The more you use AI, the more intensely and critically you need to be capable of using the cognitive functions that allow you to integrate the AI work product into the enterprise. Consider the following thought experiment.
Imagine a company comprised of a human founder-owner-CEO and an array of AI agents. That human might only have a small number of outputs to deliver in say a year: e.g., 1) what changes will I make to the strategy? 2) what capital will I allocate? 3) what material investments will I make? 4) what targets will I set? (Not in any particular order.)
That CEO better be pretty capable in the decisions they take, and discerning in the work they do leading to those decisions!
The upshot of this has vast social implications: a bifurcation of society into a new "superclass" of those who have capital and can "drive" the AI, and the rest.
I like the combine harvester analogy @mikebrisy. I also like the automobile/horse analogy. Everyone rode horses back in the day, then the Model T Ford came along and changed the transport paradigm. But it wasn't the end of horse riding. Horse riding became a much more focused, valuable, luxury pursuit precisely because the car could do all of the laborious and boring stuff.
It's my hope that post AI maturity, free thinking, genuine human insight and robust intellectualism becomes more valued, more focused and more nuanced, not less.
For me, the AI revolution feels inevitable, especially given how much focus there is on maximising returns on invested capital (including time). I do understand where others are coming from too, and I see AI as a tool that can genuinely enhance thinking when used well.
What I do struggle with, both professionally and personally, is the perception bias that comes with AI-generated content. I think this ties into what UlladullaDave was getting at in his post. As we use LLMs more often, we become more familiar with their outputs and the styles and formatting they deliver.
Maybe it's just me, but when I see certain formatting cues at work (ChatGPT being a pretty obvious example) I immediately start to suspect the content may have been AI-generated. That then raises the internal question: how much of it is genuinely thought through, and what level of care was taken to verify the accuracy of the output?
I know we all have our opinions, levels of expertise and points of view (which is why I love the community), and I too use AI to help question my thinking and polish my writing. While I'm not typically a great salesperson, I still prefer that my outputs retain the normal font/text size/headings that I would typically use day to day; otherwise I feel like I'm not being authentic when posting content.
Not sure how much value I'm adding here; mostly posting to see whether others have the same conundrum as me.