Parameter values
Each call that you send to a model includes parameter values that control how the model generates a response. The same prompt can produce different results with different parameter values, so experiment with them to find the values that work best for your task.
The available parameters differ from model to model, but the most common are:
Temperature
Temperature controls the degree of randomness in token selection; it is applied during sampling, after Top-K and Top-P have narrowed the candidate pool. Lower temperatures are good for prompts that require a more deterministic and less open-ended or creative response, while higher temperatures can lead to more diverse or creative results. A temperature of 0 is deterministic, meaning that the highest-probability token is always selected.
For most use cases, try starting with a temperature of 0.2. If the model returns a response that is too generic or too short, or gives a fallback response, try increasing the temperature.
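To make the mechanics concrete, here is a minimal Python sketch of temperature sampling over a toy distribution; the token names and logit values are invented for illustration and are not tied to any particular model API.

```python
import math
import random

def sample_with_temperature(logits: dict[str, float], temperature: float) -> str:
    """Pick a token from softmax(logits / temperature)."""
    if temperature == 0:
        # Temperature 0 is deterministic: always take the highest-scoring token.
        return max(logits, key=logits.get)
    scaled = {tok: score / temperature for tok, score in logits.items()}
    max_score = max(scaled.values())  # subtract the max for numerical stability
    weights = {tok: math.exp(s - max_score) for tok, s in scaled.items()}
    return random.choices(list(weights), weights=list(weights.values()))[0]

# Hypothetical logits for three candidate tokens.
logits = {"the": 2.0, "a": 1.0, "cat": 0.5}
print(sample_with_temperature(logits, 0.2))   # almost always "the"
print(sample_with_temperature(logits, 1.5))   # noticeably more varied
```

Dividing the logits by a small temperature exaggerates the gap between candidates, which is why low temperatures behave almost greedily.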
Token limit
Token limit determines the maximum amount of text the model can output in response to a single prompt. A token is approximately four characters. The default value is 256.
Specify a lower value for shorter responses and a higher value for longer responses.
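Because a token is roughly four characters, you can estimate how much of the limit a given piece of text consumes. A minimal sketch of that heuristic follows; the exact count always depends on the model's tokenizer.

```python
def approx_token_count(text: str) -> int:
    """Estimate tokens with the rough four-characters-per-token heuristic."""
    return max(1, round(len(text) / 4))

prompt = "Explain what a token limit is in one sentence."
print(approx_token_count(prompt))  # 12 (46 characters / 4, rounded)
```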
Top-K
Top-K changes how the model selects tokens for output. A Top-K of 1 means the next selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a Top-K of 3 means that the next token is selected from among the three most probable tokens by using temperature.
At each token selection step, the Top-K tokens with the highest probabilities are sampled. Those tokens are then further filtered based on Top-P, and the final token is selected using temperature sampling.
Specify a lower value for less random responses and a higher value for more random responses. The default Top-K is 40.
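Here is a minimal sketch of the Top-K filtering step, using made-up probabilities; sampling from the remaining pool (represented here by random.choices) stands in for the temperature step.

```python
import random

def top_k_filter(probs: dict[str, float], k: int) -> dict[str, float]:
    """Keep only the k most probable tokens and renormalize."""
    top = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:k]
    total = sum(p for _, p in top)
    return {tok: p / total for tok, p in top}

probs = {"A": 0.5, "B": 0.3, "C": 0.15, "D": 0.05}
pool = top_k_filter(probs, k=2)  # {"A": 0.625, "B": 0.375}
print(random.choices(list(pool), weights=list(pool.values()))[0])
# With k=1 only "A" survives, which is exactly greedy decoding.
```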
Top-P
Top-P changes how the model selects tokens for output. Tokens are selected from the most probable (see Top-K) to the least probable until the sum of their probabilities equals the Top-P value. For example, if tokens A, B, and C have probabilities of 0.3, 0.2, and 0.1 and the Top-P value is 0.5, then the model selects either A or B as the next token (using temperature) and excludes C as a candidate.
Specify a lower value for less random responses and a higher value for more random responses. The default Top-P is 0.80.
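The worked example above translates directly into code. Below is a minimal sketch of Top-P (nucleus) filtering; the probabilities are the ones from the example, with the rest of the vocabulary omitted for brevity.

```python
import random

def top_p_filter(probs: dict[str, float], top_p: float) -> dict[str, float]:
    """Keep the most probable tokens until their cumulative probability reaches top_p."""
    kept, cumulative = {}, 0.0
    for tok, p in sorted(probs.items(), key=lambda kv: kv[1], reverse=True):
        kept[tok] = p
        cumulative += p
        if cumulative >= top_p:
            break
    total = sum(kept.values())
    return {tok: p / total for tok, p in kept.items()}

probs = {"A": 0.3, "B": 0.2, "C": 0.1}
pool = top_p_filter(probs, top_p=0.5)  # keeps A and B (0.3 + 0.2 = 0.5), drops C
print(random.choices(list(pool), weights=list(pool.values()))[0])
```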