OpenAI LLM function calling
November 29, 2024
In this article I will show you how to use OpenAI function calling to call a function in your own codebase and use the result as part of the conversation with a user of an LLM chat application. I focus mainly on the API request body itself rather than the actual application code.
A chat application makes multiple calls to the OpenAI chat completions API during a user session: the API is called for every user message. Because the LLM needs the context of the conversation, the whole conversation is sent with each request in a structured way. This way the LLM can give a better response to the user.
Note: the LLM's own answers to the user's questions also need to be sent back to the API, as they are part of the conversation (in the JSON example below these are the messages where the role is "assistant").
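As a minimal sketch of that flow, assuming the official openai Python package (v1 style client) and the gpt-4-turbo model used later in this article, the key point is that the messages list grows with every turn:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# the conversation so far, starting with the system message
messages = [{"role": "system", "content": "you're a helpful assistant"}]

def chat(user_message: str) -> str:
    # every user message is appended and the *whole* history is sent
    messages.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=messages,
    )
    answer = response.choices[0].message.content
    # the assistant's answer is appended too, so it is part of the
    # context of the next request
    messages.append({"role": "assistant", "content": answer})
    return answer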
The specs for the OpenAI chat completions API can be found here: Chat completions API
One powerful feature is "function calling": you define a function in the request to the API, and the LLM can decide to call it; you detect this by parsing the API response. Because you provide a description, the LLM uses it to identify when the function is relevant. Of course, your code needs to execute the function itself and return the result to the LLM by calling the chat completions API again with that result.
This is the full JSON request for a call to the OpenAI chat completions API where a function "get_current_weather" is defined and called as a result of two questions from the user (the explanation continues below the JSON).
{
  "messages": [
    {
      "role": "system",
      "content": "you're a helpful assistant"
    },
    {
      "role": "user",
      "content": "whats the current weather in the netherlands?"
    },
    {
      "role": "assistant",
      "tool_calls": [
        {
          "id": "1092d64f-9019-4f34-af3c-31c0c4d2c7c8",
          "type": "function",
          "function": {
            "name": "get_current_weather",
            "arguments": "{\"location\":\"The Netherlands\",\"unit\":\"celsius\"}"
          }
        }
      ]
    },
    {
      "role": "tool",
      "content": "the weather is fine",
      "tool_call_id": "1092d64f-9019-4f34-af3c-31c0c4d2c7c8"
    },
    {
      "role": "assistant",
      "content": "The current weather in the Netherlands is fine."
    },
    {
      "role": "user",
      "content": "whats the current weather in iceland?"
    },
    {
      "role": "assistant",
      "tool_calls": [
        {
          "id": "cec2d1d3-ef92-451e-b75a-59ac19bb1b02",
          "type": "function",
          "function": {
            "name": "get_current_weather",
            "arguments": "{\"location\":\"Iceland\",\"unit\":\"celsius\"}"
          }
        }
      ]
    },
    {
      "role": "tool",
      "content": "the weather is fine",
      "tool_call_id": "cec2d1d3-ef92-451e-b75a-59ac19bb1b02"
    }
  ],
  "temperature": 0.6,
  "model": "gpt-4-turbo",
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
          "type": "object",
          "properties": {
            "location": {
              "type": "string",
              "description": "The city and state or country, e.g. San Francisco, CA or The Netherlands"
            },
            "unit": {
              "type": "string",
              "enum": ["celsius", "fahrenheit"]
            }
          },
          "required": ["location"]
        }
      }
    }
  ],
  "tool_choice": "auto"
}
In the JSON example above you can see that the function "get_current_weather" is defined in the "tools" array, with a description and parameters. The assistant requests the function call in the "messages" array through a "tool_calls" entry, and the result of the function is sent back to the API in a message with role "tool".
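Continuing the Python sketch from earlier in this article, the same request could be sent like this with the openai package; the tools definition is a plain Python structure mirroring the JSON above:

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state or country, e.g. San Francisco, CA or The Netherlands",
                    },
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                },
                "required": ["location"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="gpt-4-turbo",
    temperature=0.6,
    messages=messages,
    tools=tools,
    tool_choice="auto",  # let the LLM decide when to call the function
)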
This is the API response containing the function call when the user asks for the current weather in the Netherlands:
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1699896916,
  "model": "gpt-4-turbo",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": null,
        "tool_calls": [
          {
            "id": "call_abc123",
            "type": "function",
            "function": {
              "name": "get_current_weather",
              "arguments": "{\n\"location\": \"The Netherlands\"\n}"
            }
          }
        ]
      },
      "logprobs": null,
      "finish_reason": "tool_calls"
    }
  ]
}
In your own code you execute "get_current_weather" and return the result by calling the chat completions API again, with the function result added to the messages. The LLM then uses that result in its answer to the user. The first JSON example on this page shows what such a request looks like.
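Putting it together, a minimal sketch of that round trip could look like this, reusing the client, messages, tools and response variables from the sketches above; the local get_current_weather implementation is a hypothetical stand-in for your own code:

import json

def get_current_weather(location: str, unit: str = "celsius") -> str:
    # hypothetical stand-in for your own weather lookup
    return f"the weather in {location} is fine"

message = response.choices[0].message

if message.tool_calls:
    # the assistant message containing tool_calls must stay in the history
    messages.append(message)
    for tool_call in message.tool_calls:
        # the arguments arrive as a JSON string, so parse them first
        arguments = json.loads(tool_call.function.arguments)
        result = get_current_weather(**arguments)
        # send the function result back, linked via tool_call_id
        messages.append({
            "role": "tool",
            "tool_call_id": tool_call.id,
            "content": result,
        })
    # second call: the LLM now uses the function result in its answer
    follow_up = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=messages,
        tools=tools,
    )
    print(follow_up.choices[0].message.content)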
If you have any questions or feedback, feel free to reach out to me here: maikel@devhelpr.com