Google 《Prompt Engineering v7》Writer Joey Haymaker Designer Michael Lanning Introduction 6 Prompt engineering 7 LLM output configuration 8 Output length 8 Sampling controls 9 Temperature 9 Top-K and top-P 10 Putting it all together AI or by using the API, because by prompting the model directly you will have access to the configuration such as temperature etc. This whitepaper discusses prompt engineering in detail. We will look configurations of a LLM. LLM output configuration Once you choose your model you will need to figure out the model configuration. Most LLMs come with various configuration options that control the LLM’s output0 码力 | 68 页 | 6.50 MB | 6 月前3
OpenAI 《A practical guide to building agents》case and layer in additional ones as you uncover new vulnerabilities. Guardrails are a critical component of any LLM-based deployment, but should be coupled with robust authentication and authorization0 码力 | 34 页 | 7.00 MB | 6 月前3
DeepSeek-V2: A Strong, Economical, and Efficient
Mixture-of-Experts Language Modelvectors and the intermediate hidden states of routed experts) to ensure stable training. Under this configuration, DeepSeek-V2 comprises 236B total parameters, of which 21B are activated for each token. Training expert is 1408. Among the routed experts, 6 experts will be activated for each token. Under this configuration, DeepSeek-V2-Lite comprises 15.7B total parameters, of which 2.4B are activated for each token0 码力 | 52 页 | 1.23 MB | 1 年前3
共 3 条
- 1













