{
  "model": "google/gemini-3.1-pro-preview",
  "reasoning": {
    "effort": "medium"
  },
  "messages": [
    { "role": "user", "content": "Identify yourself" }
  ]
}
This results in a generation whose stats report "native_tokens_reasoning": 0.
If I set reasoning to "medium" in OpenRouter's own chat application, the resulting generation instead shows "native_tokens_reasoning": 710.
Adding or removing other parameters like max_tokens, temperature, etc. doesn't fix it.
While I don't need reasoning in all requests, I would like to take advantage of it when needed, especially on models that support it, like Gemini.
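For reference, here is roughly how I'm calling it. This is a minimal Python sketch, not my exact client code; it assumes the standard chat completions endpoint, the generation stats lookup endpoint, and a placeholder OPENROUTER_API_KEY environment variable.

import os
import requests

API_KEY = os.environ["OPENROUTER_API_KEY"]  # placeholder; substitute your own key
BASE = "https://openrouter.ai/api/v1"

# Send the completion request with reasoning effort set to "medium".
resp = requests.post(
    f"{BASE}/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "google/gemini-3.1-pro-preview",
        "reasoning": {"effort": "medium"},
        "messages": [{"role": "user", "content": "Identify yourself"}],
    },
)
resp.raise_for_status()
gen_id = resp.json()["id"]

# Look up the generation stats by id; this is where I read native_tokens_reasoning,
# and where it comes back as 0 for requests made through the API.
stats = requests.get(
    f"{BASE}/generation",
    headers={"Authorization": f"Bearer {API_KEY}"},
    params={"id": gen_id},
)
print(stats.json())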