ChatGPT and Salmon Farm Analysis - starting to see some light at the end of the tunnel
A few weeks ago, I posted a bitter screed about my frustrations with ChatGPT – specifically with my attempts to develop a custom GPT focused on analyzing financial reports from publicly listed salmon farming companies. I’ve been using ChatGPT for a while now and my goal was to develop a system for reviewing and comparing key details between farming companies - things like profitability, forward price expectations, biomass costs and liquidity. There is certainly a dose of morbid curiosity in this effort but also a genuine need to remain current and informed given my line of work.
The learning curve for working with a large language model like ChatGPT is steep – I can see how the skills associated with developing system prompts will become critical for success in coming years. If a user approaches interactions with the model in the same way they would use a Google search engine, the results will be about as reliable: a search might give you the answer you want, but it will likely be buried in a mountain of best guesses. My initial efforts were along these lines - I recall asking it why Greenland didn’t have a salmon farming sector like other countries in the region and was dismayed by the nonsense it spewed in response. As I’ve stuck with it, and learned more about how these models operate, I’ve found the “garbage-in, garbage-out” rule applies - in many respects, a crap result is the product of a poorly formatted prompt.
Liquidity and covenant compliance analysis
I’m starting to find my feet with the system. Here’s the analysis it provided of liquidity risk and banking covenant compliance for Salmones Camanchaca in Q4 2024:
Interestingly, I had instructed the system to provide the results as a table – which it didn’t do on the first pass. It’s the kind of thing that can be frustrating, but if you remind the system with a sample you like, it can regenerate a response more to your liking in seconds.
Customizing ChatGPT
Training the system to conduct this kind of analysis in a consistent manner required some patience and persistence. Fortunately, the interface is pretty slick.
In the left-hand pane, you interact with the customization screen to refine system instructions and tell the GPT what you want it to do. In the right-hand pane, you can interact with the Custom GPT to see whether your instructions produce the desired results. The customization screen can also review your interactions with the system and recommend adjustments to workflows. I know I am only scratching the surface of the system’s capabilities – there are many layers of complexity beneath what I am trying to do: coding, integration with customer interfaces, data capture, etc.
System limitations:
1) It has no long-term memory. ChatGPT is a bit like that guy from the film Memento: he has no short-term memory and has to leave himself messages in the form of Post-it notes and tattoos. Though the model has access to vast knowledge, it cannot remember the interactions you had with it yesterday.
2) Unless it is given very precise instructions, it almost never produces identical results twice. I don’t fully understand why, but as I understand it, responses are generated by sampling from probable next words, so some variation is built in. In the liquidity example above, I spent hours clarifying my instructions and the format I was looking for in the results. I’m generally happy with the response to my request, but I had to prod it for what I was after. The upside is that the system is endlessly patient – if, during a session, you don’t get the type of response you want, you can give it a saved sample of a format/presentation you do like, and it will follow the example.
3) It does a poor job of reviewing multiple documents for key data. I had been developing a workflow designed to extract and compare forward price assumptions from the fair value adjustment section in the notes to the financial statements for a variety of companies. Despite very specific instructions on where to find the information, it consistently produced inaccurate or incomplete results. I suspect the system treats speed of response as one of its prime directives: if it is reviewing several documents, it will grab the first mention of “forward prices”, which is likely in the commentary section of the report.
4) If you are not careful, it can pump out a lot of crap information. In an ideal world, it would be nice to delegate a full review of a quarterly statement to an AI system and have confidence that the output was always correct, but a user needs to stay skeptical and verify key information. Here’s an example of the kind of thing that can happen: in this case, it did not list Måsøval even though I had uploaded the document.
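The run-to-run variation in point 2 isn’t unique to ChatGPT – language models generally pick each next word by sampling from a probability distribution, with a “temperature” setting controlling how adventurous the pick is. A toy sketch (invented scores, standard library only) shows why low temperature gives repeatable answers and higher temperature gives variety:

```python
import math
import random

def sample_token(logits, temperature, rng):
    """Sample one token index from scores using temperature-scaled softmax.

    As temperature approaches 0, probability collapses onto the
    highest-scoring token and the output becomes effectively deterministic;
    higher temperatures spread probability across alternatives, which is
    why the same prompt can yield different completions.
    """
    t = max(temperature, 1e-6)            # guard against division by zero
    scaled = [x / t for x in logits]
    m = max(scaled)                        # subtract max for numeric stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(logits)), weights=probs, k=1)[0]

rng = random.Random(0)
logits = [2.0, 1.0, 0.5]  # toy scores for three candidate tokens

# Near-zero temperature: 100 draws all land on the top-scoring token.
cold = {sample_token(logits, 0.01, rng) for _ in range(100)}

# High temperature: the same scores yield a mix of picks.
hot = {sample_token(logits, 2.0, rng) for _ in range(100)}

print(cold)  # {0}
print(hot)   # typically several different indices
```

This is only an illustration of the sampling mechanism, not of ChatGPT’s actual internals; the point is that variation between runs is by design, which is why saving a good sample output and feeding it back is such an effective way to pin the format down.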
Tips for using the system productively
1) Invest time in developing workflows and specifying the quality of information, the importance of accuracy, formats, key variables, etc. If you get a result you are happy with, save a copy and use it to remind the system of what you are looking for. It can be a slow process involving many iterations, but it’s good training for both you and the system.
2) Focus on one report at a time. It can do an excellent job of extracting data from a single report. I’m probably not thinking about this correctly, but I feel like the system has a budget of 30 seconds to produce a result: if it spends that time focusing on one document, it is much more likely to produce an accurate result. If it must divide those 30 seconds among multiple documents, it will find the first result that looks like it might be an answer and return that.
3) Take the time to review the information it returns with a critical eye – the system can help with this. For example, ask it where it found information on the company’s cash position. It should be able to provide an exact location in the document, and you should then take a few minutes to confirm. This step may also identify unacceptable shortcuts the system is using to generate information. In my system, I have told the Custom GPT to retrieve information from the financial statements and the notes to the financial statements, and to ignore the management commentary at the beginning of the document. One of my other workflows looks at the forward price assumptions used to generate fair market value adjustments; generally, these are presented in detail in the notes to the financial statements. If the Custom GPT returns limited information or a high-level summary, I know it pulled the information from the management commentary. It’s a bit like dealing with a clever analyst who wants to do just enough to keep you happy with limited effort. The good thing is that you can push back and insist on better information.
4) You can upload up to 20 documents into the knowledge section of the Custom GPT. The files can be large, but long PDFs can require a lot of system time to digest fully. One shortcut is to populate a spreadsheet with key criteria from each company – for example, when analyzing Q4-2024 reports, comparisons to previous quarters are easy if the earlier figures are embedded in a single spreadsheet, rather than having to refer back to each previous quarter’s report. For each new quarter, the updated spreadsheet is reloaded into the system’s knowledge section as a first step in the analysis. The spreadsheet acts as a proxy for system memory. (Apparently, this can be programmed in Python, but I have no idea how to even begin with something like that.)
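For anyone curious what the Python version of that last tip might look like: the “spreadsheet as memory” idea can be as simple as appending each quarter’s key figures to a running CSV file that gets re-uploaded to the knowledge section. A minimal sketch, with invented column names and placeholder figures – adapt both to whatever metrics you actually track:

```python
import csv
from pathlib import Path

# Hypothetical column names -- swap in the metrics you care about.
FIELDS = ["company", "quarter", "revenue_eur_m", "ebit_eur_m",
          "biomass_cost_eur_kg", "cash_eur_m"]

def append_quarter(path, rows):
    """Append one quarter's key figures to a running CSV.

    The file accumulates every quarter analyzed so far, so re-uploading
    it to the Custom GPT's knowledge section gives the model a compact
    history to compare against, instead of re-reading old PDFs.
    """
    file = Path(path)
    is_new = not file.exists()
    with file.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()  # write the header only once
        writer.writerows(rows)

# Placeholder figures, not real Camanchaca numbers.
append_quarter("salmon_metrics.csv", [
    {"company": "Salmones Camanchaca", "quarter": "Q4-2024",
     "revenue_eur_m": 0, "ebit_eur_m": 0,
     "biomass_cost_eur_kg": 0, "cash_eur_m": 0},
])
```

Run once per company per quarter, the CSV becomes the compact history file described above, at a fraction of the size of the underlying reports.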
Parting comments
Thanks to the brave heroes who are still reading.
While my efforts with ChatGPT are, at times, frustrating, I am starting to see tangible benefits and an increasing ability to move beyond the details that the investor relations crowd wants me to see and into the really important stuff in the fine print. I’ll keep working on this and provide updates as I make progress or my head explodes.
Please let me know what you think via the comments section below, LinkedIn or by email at Info@AlanWCook.com.