
Experimenting With Temperature (LLMs)

Subhaditya Mukherjee
9 min read · Jul 6, 2024


Temperature
Visualizing Temperature Using Softmax
Creating the Experimental Setup
Defining a Prompt
Creating a Chain
Parsing the Results
Running the Experiments and Plotting Results
Experiment 1
Experiment 2
Experiment 3
Experiment 4
Conclusion

Over the past few months at work, I have been experimenting with LLMs in an attempt to improve the search experience for our users. While our existing implementation uses ElasticSearch, we also wanted the option of a more “semantic” search experience.

Aside from the usual RAG pipeline that everyone and their grandparents seem to be using these days, we also wanted to experiment with using an LLM to semi-automatically generate filters for our search queries. It may not seem like a big feature, but picking filters by hand has always been a bit of an annoyance for some of our users.

So what does this entail? Consider our current interface: a search bar at the top, followed by a set of filters that users can use to narrow down their search. This works pretty well as is, but what if we could automate part of it?
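To make the idea concrete, here is a minimal sketch of what “LLM-generated filters” could look like — not the exact setup built later in this article. The model reads the free-text query and proposes values for a couple of placeholder filters; the OpenAI client, the model name, and the "category"/"max_price" schema are all assumptions made purely for illustration.

```python
# Hypothetical sketch: turn a free-text search query into structured filters.
# The model name and the filter schema are placeholders, not this article's setup.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def suggest_filters(query: str, temperature: float = 0.0) -> dict:
    """Ask the model to propose filter values for a search query."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        temperature=temperature,
        response_format={"type": "json_object"},
        messages=[
            {
                "role": "system",
                "content": (
                    "Extract search filters from the user's query. "
                    'Reply as JSON with keys "category" and "max_price" '
                    "(use null when a filter does not apply)."
                ),
            },
            {"role": "user", "content": query},
        ],
    )
    return json.loads(response.choices[0].message.content)


print(suggest_filters("red running shoes under 100 euros"))
# e.g. {"category": "running shoes", "max_price": 100}
```

Because the output has to be parsed into concrete filter values, how deterministic the model is matters a lot here — which is exactly where the temperature parameter, and the experiments below, come in.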

