Bloomberg announced BloombergGPT, which looks incredible (direct link to the paper). I think this is a glimpse into the future of LLMs: domain-specific models, as the paper suggests, built to optimise processes and output.
What will be interesting to see in the finance/trading field is how democratising data insight through LLMs, and putting it within easy reach of a much larger number of people, affects alpha.
On the one hand, having raw data analysed differently by different firms means some may identify insights others haven't, trade on them, and outperform the market. On the other hand, it puts pressure on the rest of the trading pipeline, which LLMs may not necessarily be able to help with: how quickly can your systems trade on the data at your preferred horizons? How well can your backtesters be updated to work efficiently with LLM-derived data? What is the optimal parameterisation of a portfolio to trade with?
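To make the backtester question concrete, here is a minimal sketch under some loud assumptions: a hypothetical upstream pipeline has already turned LLM output (say, scored news summaries) into a daily sentiment score per asset, and the data here is synthetic. The signal logic, thresholds, and one-day horizon are illustrative, not a real strategy.

```python
# Minimal sketch: feeding hypothetical LLM-derived sentiment into a toy backtest.
# All inputs are synthetic stand-ins; in practice the sentiment series would come
# from an upstream LLM pipeline.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
dates = pd.bdate_range("2023-01-02", periods=250)

# Synthetic daily returns and an assumed LLM sentiment score in [-1, 1] for one asset.
returns = pd.Series(rng.normal(0.0003, 0.01, len(dates)), index=dates)
llm_sentiment = pd.Series(np.clip(rng.normal(0, 0.4, len(dates)), -1, 1), index=dates)

# Naive signal: long when yesterday's sentiment is clearly positive, flat otherwise.
# Shifting by one bar avoids look-ahead; the holding horizon is one day.
position = (llm_sentiment.shift(1) > 0.2).astype(float)
strategy_returns = position * returns

equity = (1 + strategy_returns).cumprod()
print(f"Final equity multiple: {equity.iloc[-1]:.3f}")
print(f"Days in market: {int(position.sum())} of {len(dates)}")
```

Even a toy like this surfaces the pipeline questions above: how fresh the sentiment has to be for a one-day horizon, and how the backtester should store and replay LLM output alongside prices.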
I think this is a microcosm of the bigger effect of domain-specific LLMs: they'll put pressure on, and create new job roles and remits in, not just the data side that LLMs produce but also the rest of the technical and business pipelines, which will have to optimise their functions to capitalise on LLM output - e.g. imagine combining trained decision trees with LLMs, using the trees to generate specific prompts that reduce hallucination or surface hidden insights.
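A rough sketch of that decision-tree-plus-LLM idea: train a small tree on structured features, then turn the decision path for a new sample into a specific, grounded prompt so the LLM reasons over explicit conditions rather than an open-ended question. The feature names, data, and the final LLM call are hypothetical placeholders, not a tested method.

```python
# Sketch: using a trained decision tree's decision path to build a specific prompt.
# Features, data and the commented-out LLM call are hypothetical.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

feature_names = ["revenue_growth", "debt_to_equity", "news_sentiment"]
X = np.random.default_rng(1).normal(size=(200, 3))
y = (X[:, 0] - 0.5 * X[:, 1] + 0.3 * X[:, 2] > 0).astype(int)  # synthetic label

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

def path_to_conditions(model, sample):
    """Convert the tree's decision path for one sample into readable conditions."""
    node_indicator = model.decision_path(sample.reshape(1, -1))
    leaf = model.apply(sample.reshape(1, -1))[0]
    conditions = []
    for node in node_indicator.indices:
        if node == leaf:  # leaf node has no split to describe
            continue
        feat = model.tree_.feature[node]
        thresh = model.tree_.threshold[node]
        op = "<=" if sample[feat] <= thresh else ">"
        conditions.append(f"{feature_names[feat]} {op} {thresh:.2f}")
    return conditions

sample = X[0]
conditions = path_to_conditions(tree, sample)
prompt = (
    "Given a company where " + "; ".join(conditions) + ", "
    "summarise the key risks to its earnings, citing only the conditions above."
)
print(prompt)
# response = llm_client.complete(prompt)  # hypothetical LLM call
```

The split of labour is the interesting part: the tree supplies the structured "what to ask", and the LLM supplies the unstructured reasoning, with the prompt constrained to the tree's conditions as one way of keeping the model grounded.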
LLMs are just one of many forms of AI model, so it'll be interesting to see whether other model types end up elsewhere in a system's execution path, collaborating with the new LLMs to take full advantage of them. There's a lot of focus on data at the moment, which is critical, but the knock-on effects on data-utilisation research will be interesting to watch.