user23456 Hi, to clarify: in general we set the default sampling parameters to what the model providers recommend. Beyond that, the assistant also has an experimental feature for power users that lets you customize the parameters with slash commands. You can add these commands to either your query or your custom instructions. For example:
/force_research - forces the model to do any kind of research before responding (search or retrieval)
/json - makes the model output purely in JSON
/system_prompt_overwrite - makes your custom instructions the only thing we put in the system prompt (by default we combine your custom instructions with our own instructions)
/temperature [0-1] - sets the temperature of the generated answer. Value should be a float (e.g. 0.2, 0.5, 1.0)
/reasoning_effort [low/medium/high] - sets the reasoning effort for reasoning models
- also accepts minimal as a value, but it only gets used for models that support it (e.g. GPT 5 models)
/verbosity [low/medium/high] - sets the verbosity for models that support this parameter (GPT 5 models)
/top_p [0-1] - controls text generation by sampling only from the smallest set of tokens whose cumulative probability reaches the given threshold (e.g. 0.95), rather than from all possible tokens. Value is a float.
/require [search/retrieval] - specify a tool to be a requirement for your query, ensuring that it will run
- in addition to the above values, it can also take code, image_gen, image_edit, wolfram, and map
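As a quick illustration of what /top_p controls under the hood, here is a minimal sketch of nucleus (top-p) filtering over a toy next-token distribution. The function name and the probability values are illustrative, not part of the assistant's actual implementation:

```python
def top_p_filter(probs, p):
    """Keep the smallest set of tokens whose cumulative probability reaches p."""
    # Rank tokens by probability, most likely first
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = {}, 0.0
    for token, prob in ranked:
        kept[token] = prob
        cumulative += prob
        if cumulative >= p:
            break
    # Renormalize so the surviving probabilities sum to 1
    total = sum(kept.values())
    return {token: prob / total for token, prob in kept.items()}

# Toy distribution (illustrative values)
probs = {"the": 0.5, "a": 0.3, "cat": 0.15, "xylophone": 0.05}
print(top_p_filter(probs, 0.9))  # keeps "the", "a", "cat"; drops "xylophone"
```

With p = 0.9, the top three tokens already cover 0.95 of the mass, so the long-tail token is cut before sampling.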
When parsed correctly, a config dropdown will show you which commands were picked up. But do note that this is an experimental feature, so expect the unexpected. In the long term we plan to move these settings into the UI so that they have better discoverability.
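Putting it together, a query combining several of these commands might look like this (the query text itself is just an illustration):

```
/temperature 0.2 /reasoning_effort high /require search
What changed in the latest draft of the spec?
```

If parsing succeeds, the config dropdown should list the temperature, reasoning-effort, and required-tool settings it recognized.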