Experiment with inferencing the IBM Granite Code foundation models in watsonx.ai to help you accomplish coding tasks.
The Granite series of decoder-only code models consists of enterprise-grade foundation models that are instruction-tuned for generative code tasks, such as fixing bugs, explaining code, and documenting code.
The models are trained on license-permissible data that was collected by following AI Ethics principles, and with a process that was guided by the IBM Corporate Legal team for trustworthy enterprise usage.
The following instruction-tuned Granite Code foundation models are available from watsonx.ai. You can click a model name to open its model card.
- granite-3b-code-instruct model card
- granite-8b-code-instruct model card
- granite-20b-code-instruct model card
- granite-34b-code-instruct model card
Inferencing the Granite Code models
To get the best results when using the Granite Code foundation models, first follow these recommendations and then experiment to get the results that you want.
Table 1 lists the recommended model parameters for prompting the Granite Code foundation models for coding tasks.
| Parameter | Recommended value or range | Explanation |
| --- | --- | --- |
| Decoding | Greedy | Greedy decoding chooses tokens from only the most-probable options, which is best when you want the model to follow instructions and be less creative. |
| Repetition penalty | 1.05 | Set the penalty to this low value to prevent the model from sounding robotic by repeating words or phrases. |
| Stopping criteria | <\|endoftext\|> | A helpful feature of the Granite Code foundation models is a special token named <\|endoftext\|> that is included at the end of each response. When some generative models return a response in fewer tokens than the maximum allowed, they can repeat patterns from the input. Using this token as a stop sequence reliably ends the response and prevents such repetition. |
| Max tokens | 900 | The maximum context window length for the code models is 8,192 tokens. For more information about tokens, see Tokens and tokenization. |
For more information about the model parameters, see Model parameters for prompting.
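If you want to apply the same values programmatically, the following sketch shows one way to do it with the ibm-watsonx-ai Python SDK. The endpoint URL, API key, project ID, and chosen model ID are placeholders that you must replace with your own values, and the SDK interface can change between releases, so treat this as an outline rather than a definitive implementation.
```python
# A minimal sketch, assuming the ibm-watsonx-ai Python SDK; the credentials,
# project ID, and model ID are placeholders.
from ibm_watsonx_ai import Credentials
from ibm_watsonx_ai.foundation_models import ModelInference
from ibm_watsonx_ai.metanames import GenTextParamsMetaNames as GenParams

credentials = Credentials(
    url="https://us-south.ml.cloud.ibm.com",  # your watsonx.ai endpoint
    api_key="YOUR_API_KEY",                   # placeholder
)

# Recommended values from Table 1; the Prompt Lab "Max tokens" setting maps
# to the max_new_tokens generation parameter.
params = {
    GenParams.DECODING_METHOD: "greedy",
    GenParams.REPETITION_PENALTY: 1.05,
    GenParams.STOP_SEQUENCES: ["<|endoftext|>"],
    GenParams.MAX_NEW_TOKENS: 900,
}

model = ModelInference(
    model_id="ibm/granite-20b-code-instruct",  # any of the Granite Code models
    credentials=credentials,
    project_id="YOUR_PROJECT_ID",              # placeholder
    params=params,
)

response = model.generate_text(
    prompt="Question:\nWrite a Python function that reverses a string.\n\nAnswer:\n"
)
print(response)
```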
Prompting the models from the Prompt Lab
To prompt a Granite Code foundation model, complete the following steps:
1. From the Prompt Lab in freeform mode, choose one of the available Granite Code foundation models.
2. From the Model parameters panel, apply the recommended model parameter values from Table 1.
3. Add your prompt, and then click Generate.
You can use prompt samples from the Try it out section.
For more information about using the Prompt Lab, see Prompt Lab.
Tips for prompting the Granite Code models
- If the response is interrupted, increase the Max tokens setting so that the model does not cut off the response and return incomplete code.
- Do not add extra whitespace. Include only one line break at the end of the prompt.
Optional system prompt
Prompts that you submit to Granite Code models do not require a system prompt. However, label the Question and the Answer to help the model understand the request, as shown in the following template:
```
Question:
{PROMPT}
Answer:
```
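As an illustration, a small Python helper like the following (hypothetical; not part of the product or the SDK) applies this template to a raw request before you send it to the model:
```python
def format_granite_prompt(prompt: str) -> str:
    """Wrap a raw request in the Question and Answer labels (illustrative helper)."""
    return f"Question:\n{prompt}\n\nAnswer:\n"

# Example usage
print(format_granite_prompt("Write a Python function that checks whether a number is prime."))
```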
If the response from the model is invalid or unexpected, try adding a system prompt. Use the same system prompt that was used when the models were instruction-tuned:
```
You are an intelligent AI programming assistant, utilizing a Granite code language model developed by IBM. Your primary function is to assist users in programming tasks, including code generation, code explanation, code fixing, generating unit tests, generating documentation, application modernization, vulnerability detection, function calling, code translation, and all sorts of other software engineering tasks.
```
You can copy and paste the following template that includes the system prompt:
```
System:
"You are an intelligent AI programming assistant, utilizing a Granite code language model developed by IBM. Your primary function is to assist users in programming tasks, including code generation, code explanation, code fixing, generating unit tests, generating documentation, application modernization, vulnerability detection, function calling, code translation, and all sorts of other software engineering tasks."
Question:
{PROMPT}
Answer:
```
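Extending the earlier illustrative helper, the same formatting can optionally prepend the system prompt under a System label:
```python
GRANITE_SYSTEM_PROMPT = (
    "You are an intelligent AI programming assistant, utilizing a Granite code "
    "language model developed by IBM. Your primary function is to assist users "
    "in programming tasks, including code generation, code explanation, code "
    "fixing, generating unit tests, generating documentation, application "
    "modernization, vulnerability detection, function calling, code "
    "translation, and all sorts of other software engineering tasks."
)

def format_granite_prompt(prompt: str, system: str | None = None) -> str:
    """Wrap a request in the Question and Answer labels, optionally preceded
    by a System label (illustrative helper, not part of the product)."""
    prefix = f'System:\n"{system}"\n\n' if system else ""
    return f"{prefix}Question:\n{prompt}\n\nAnswer:\n"

# Example usage with the instruction-tuning system prompt
print(format_granite_prompt(
    "Generate a unit test for a function that reverses a string.",
    system=GRANITE_SYSTEM_PROMPT,
))
```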
Try it out
Try these sample prompts:
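For example, a code-explanation prompt that follows the recommended Question and Answer template might look like the following. The function in the prompt is an illustrative example, not one of the product's sample prompts.
```
Question:
Explain what the following Python function does.

def fizzbuzz(n):
    for i in range(1, n + 1):
        if i % 15 == 0:
            print("FizzBuzz")
        elif i % 3 == 0:
            print("Fizz")
        elif i % 5 == 0:
            print("Buzz")
        else:
            print(i)

Answer:
```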
Supported programming languages
The Granite Code foundation models support the following programming languages:
- ABAP
- Ada
- Agda
- Alloy
- ANTLR
- AppleScript
- Arduino
- ASP
- Assembly
- Augeas
- Awk
- Batchfile
- Bison
- Bluespec
- C
- C-sharp
- C++
- Clojure
- CMake
- COBOL
- CoffeeScript
- Common_Lisp
- CSS
- Cucumber
- Cuda
- Cython
- Dart
- Dockerfile
- Eagle
- Elixir
- Elm
- Emacs_Lisp
- Erlang
- F-sharp
- FORTRAN
- GLSL
- GO
- Gradle
- GraphQL
- Groovy
- Haskell
- Haxe
- HCL
- HTML
- Idris
- Isabelle
- Java
- Java_Server_Pages
- JavaScript
- JSON
- JSON5
- JSONiq
- JSONLD
- JSX
- Julia
- Jupyter
- Kotlin
- Lean
- Literate_Agda
- Literate_CoffeeScript
- Literate_Haskell
- Lua
- Makefile
- Maple
- Markdown
- Mathematica
- Objective-C++
- OCaml
- OpenCL
- Pascal
- Perl
- PHP
- PowerShell
- Prolog
- Protocol_Buffer
- Python
- Python_traceback
- R
- Racket
- RDoc
- Restructuredtext
- RHTML
- RMarkdown
- Ruby
- Rust
- SAS
- Scala
- Scheme
- Shell
- Smalltalk
- Solidity
- SPARQL
- SQL
- Stan
- Standard_ML
- Stata
- Swift
- SystemVerilog
- Tcl
- Tcsh
- Tex
- Thrift
- Twig
- TypeScript
- Verilog
- VHDL
- Visual_Basic
- Vue
- Web_Ontology_Language
- WebAssembly
- XML
- XSLT
- Yacc
- YAML
- Zig
Parent topic: IBM foundation models