
ChatGPT secrets from the newest $300k salary jobs in finance

Prompt engineering is a skill we're all going to need to learn sooner or later. ChatGPT, Bard and other chatbots built on large language models (LLMs) have the potential to revolutionize workflows, but there are a few things worth knowing that can make your interactions with AI far more effective.

Last week, a research paper from Stanford University, in partnership with the University of California, Berkeley and research firm Samaya AI, examined how the length of prompts given to AI models, and the placement of information within them, affects their ability to accurately carry out a given command. It found that prompts become harder to execute accurately as they grow longer and that, while information at the start and end of a prompt was still used, information in the middle was often disregarded.

Why is this? The study notes that language models are "generally implemented with Transformers, which scale poorly to long sequences." They have what's called a 'context window': the maximum number of tokens (roughly, words or word fragments) a model can take into account when generating a response. Anything beyond that window simply isn't seen.
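To make the idea concrete, here is a minimal sketch of a context window, assuming (purely for illustration) that one token equals one whitespace-separated word; real models use subword tokenizers, so the counts below are not what a production model would see.

```python
def truncate_to_window(prompt: str, window_size: int) -> str:
    """Keep only the last `window_size` tokens, mimicking a model that
    cannot attend to anything before the start of its context window."""
    tokens = prompt.split()  # crude stand-in for a real tokenizer
    return " ".join(tokens[-window_size:])

long_prompt = " ".join(f"word{i}" for i in range(100))
print(truncate_to_window(long_prompt, 5))
# -> word95 word96 word97 word98 word99
```

Everything before the final five words is invisible to this toy "model", which is why a prompt that overflows the window loses information outright.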

There is a "distinctive U-shaped" curve in accuracy for language models: the closer key information sits to the middle of the input, the lower the chances of a model noticing it. In a test in which models were asked to retrieve specific information from a series of different documents, performance when the information was in the middle was "lower than its performance when predicting without any documents" at all.
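The shape of that test can be sketched roughly as follows; the `build_prompt` helper, the document texts and the question are all hypothetical stand-ins for the study's actual multi-document QA data, but they show what the researchers varied: the position of the one answer-bearing document among distractors.

```python
def build_prompt(question: str, key_doc: str, distractors: list, position: int) -> str:
    """Place the answer-bearing document at `position` among the
    distractor documents, then append the question (simplified sketch)."""
    docs = list(distractors)
    docs.insert(position, key_doc)
    body = "\n".join(f"Document [{i + 1}]: {d}" for i, d in enumerate(docs))
    return f"{body}\nQuestion: {question}"

distractors = [f"Unrelated fact number {i}." for i in range(4)]
key = "The capital of Freedonia is Sylvania."  # invented fact for the demo

# Same content, different positions: the study varied exactly this.
at_start = build_prompt("What is the capital of Freedonia?", key, distractors, 0)
in_middle = build_prompt("What is the capital of Freedonia?", key, distractors, 2)
```

Feeding prompts like `at_start` and `in_middle` to a model and scoring the answers is what produces the U-shaped accuracy curve.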

So why not just make the context window wider? After all, a study from Meta scientists published a month earlier showed that context window size could be increased by up to sixteen times using Position Interpolation. However, the Stanford study notes that when the same prompts were run through standard and extended-context versions of a model, the resulting performance curves were "nearly superimposed": a bigger window alone didn't change how poorly information in the middle was used.
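The core trick of Position Interpolation can be sketched in a couple of lines: rather than feeding the model position indices beyond the range it was trained on, positions from the longer sequence are rescaled to fit back inside the trained range. This is a sketch of the idea only, not Meta's implementation, and the lengths chosen are illustrative.

```python
def interpolate_position(m: float, trained_len: int, extended_len: int) -> float:
    """Map position m in an extended context of length `extended_len`
    back into the range [0, trained_len) the model was trained on."""
    return m * trained_len / extended_len

# A sixteen-fold extension, e.g. 2,048 trained positions serving 32,768:
print(interpolate_position(32_000, 2_048, 32_768))
# -> 2000.0
```

Every position in the long sequence lands inside familiar territory for the model, which is what lets the window grow without retraining from scratch; as the Stanford results suggest, though, fitting more text in is not the same as using all of it.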

It's not just academics and tech firms conducting this kind of research; a number of financial institutions are also hiring to test the limits of AI bots. Bloomberg, which is building BloombergGPT, is hiring multiple senior AI researchers who can earn salaries upwards of $300k. JPMorgan, which is developing its financial advice bot IndexGPT, is hiring AI-focused staff at a significantly higher rate than other banks, in both production and research capacities. If you want any hope of succeeding in those roles, you'll want to learn to keep important prompt information as far from the middle as possible.
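That advice can even be automated. If you have documents already ranked by relevance, one simple tactic is to alternate them onto the front and back of the prompt so the weakest material lands in the middle. The `reorder_for_edges` helper below is a hypothetical sketch of that tactic, not something taken from the study.

```python
def reorder_for_edges(docs_ranked: list) -> list:
    """Given documents sorted best-first, arrange them so the most
    relevant sit at the start and end of the prompt and the least
    relevant end up in the middle, where models pay least attention."""
    front, back = [], []
    for i, doc in enumerate(docs_ranked):
        (front if i % 2 == 0 else back).append(doc)
    return front + back[::-1]

print(reorder_for_edges(["best", "2nd", "3rd", "4th", "worst"]))
# -> ['best', '3rd', 'worst', '4th', '2nd']
```

The top-ranked document opens the prompt, the runner-up closes it, and the weakest one is buried in the middle, exactly where the U-shaped curve says it matters least.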



Author: Alex McMurray, Reporter
