How to use CarperAI/diff-codegen-350m-v1:

**Transformers**

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="CarperAI/diff-codegen-350m-v1")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("CarperAI/diff-codegen-350m-v1")
model = AutoModelForCausalLM.from_pretrained("CarperAI/diff-codegen-350m-v1")
```

**vLLM**

Install from pip and serve the model:

```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "CarperAI/diff-codegen-350m-v1"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "CarperAI/diff-codegen-350m-v1",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

**SGLang**

Install from pip and serve the model:

```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "CarperAI/diff-codegen-350m-v1" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "CarperAI/diff-codegen-350m-v1",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Or use the Docker image:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "CarperAI/diff-codegen-350m-v1" \
    --host 0.0.0.0 \
    --port 30000
```

**Docker Model Runner**

```shell
docker model run hf.co/CarperAI/diff-codegen-350m-v1
```
---
language:
- en
- code
license: "mit"
tags:
- Diff Model
- pytorch
- causal-lm
- code-generation
- The Pile
---
**Model Description**
Diff-Codegen-350M is the first in a series of diff models released by CarperAI. A diff model is an autoregressive language model trained on edits to a piece of text, formatted in Unified Diff Format. These diff models can suggest, given a section of text and a description of the desired change, an intelligent change to the text that fits the description, marking the lines added, changed, and deleted in diff format. The primary use case for these models is for suggesting changes to code—as such, most models we release will be fine-tuned versions of models trained on code datasets.
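For instance, a suggested edit in unified diff format might look like the following (an illustrative example, not actual model output):

```diff
--- greet.py
+++ greet.py
@@ -1,2 +1,2 @@
-def greet():
-    print("hello")
+def greet(name):
+    print(f"hello, {name}")
```

Lines prefixed with `-` are deleted and lines prefixed with `+` are added, so a single diff can express insertions, deletions, and replacements at specific locations in the file.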
Diff-Codegen-350M-v1 is a preliminary release of an experimental artifact and should be treated as such. We are releasing these results and this model in the hope that they may be useful to the greater research community, especially those interested in LMs for code.
CarperAI will be releasing larger diff LMs trained on larger code datasets in the near future, building on this initial release.
**Training Data**
This model is a fine-tune of Codegen-350m-mono by Salesforce. That language model was first pre-trained on The Pile, an 800GB dataset composed of varied web corpora. The datasheet and paper for the Pile can be found here and here respectively. The model was then fine-tuned on a large corpus of code data in multiple languages, before finally being fine-tuned on a Python code dataset. The Codegen paper with full details of these datasets can be found here.
Our diff model was trained on a dataset of commits from BigQuery, a large-scale dataset of many programming languages from GitHub repositories. We filtered the dataset by the number of stars in the repository (>100 stars), license (only open-source non-copyleft licensed code included), and length of file (files greater than 2048 tokens in length were excluded).
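The filtering criteria above can be sketched as a simple predicate. This is our own illustration, under assumed record fields (`stars`, `license`, `num_tokens`) and an illustrative set of non-copyleft licenses — the actual BigQuery schema and license list are not specified in this card:

```python
# Illustrative sketch of the commit-filtering criteria described above.
# Field names and the license set are assumptions, not the actual schema.
ALLOWED_LICENSES = {"mit", "apache-2.0", "bsd-3-clause"}  # non-copyleft, illustrative

def keep_commit(record: dict) -> bool:
    """Keep commits from popular, permissively licensed repos with short files."""
    return (
        record["stars"] > 100                      # repository has > 100 stars
        and record["license"] in ALLOWED_LICENSES  # open-source, non-copyleft
        and record["num_tokens"] <= 2048           # file not longer than 2048 tokens
    )

commits = [
    {"stars": 250, "license": "mit", "num_tokens": 1500},
    {"stars": 50,  "license": "mit", "num_tokens": 1500},      # too few stars
    {"stars": 250, "license": "gpl-3.0", "num_tokens": 1500},  # copyleft license
    {"stars": 250, "license": "mit", "num_tokens": 4096},      # file too long
]
kept = [c for c in commits if keep_commit(c)]  # only the first record survives
```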
The model was trained using the GPT-2 tokenizer.
**Training Details**
The model was trained for 44,574 steps (1 epoch) on 8 A100 GPUs.
Each file was formatted as follows for input to the language model:
```
<NME> {FILE_NAME}
<BEF> {INPUT_FILE}
<MSG> {COMMIT_MESSAGE}
<DFF> {FILE_DIFF}
```
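A prompt in this format can be assembled with a small helper and then passed to the model for generation. The helper name `make_prompt` and the example file contents are our own; only the `<NME>`/`<BEF>`/`<MSG>`/`<DFF>` markers come from the format above:

```python
# Sketch: build a prompt in the training format above and sample a diff.
def make_prompt(file_name: str, file_contents: str, commit_message: str) -> str:
    """Format inputs the way the model saw them during training.

    The prompt ends at <DFF>, so the model's continuation is the diff itself.
    """
    return (
        f"<NME> {file_name}\n"
        f"<BEF> {file_contents}\n"
        f"<MSG> {commit_message}\n"
        f"<DFF>"
    )

prompt = make_prompt(
    "greet.py",
    'def greet():\n    print("hello")\n',
    "Add a name argument to greet",
)

# To sample a diff (requires downloading the checkpoint):
# from transformers import AutoTokenizer, AutoModelForCausalLM
# tokenizer = AutoTokenizer.from_pretrained("CarperAI/diff-codegen-350m-v1")
# model = AutoModelForCausalLM.from_pretrained("CarperAI/diff-codegen-350m-v1")
# inputs = tokenizer(prompt, return_tensors="pt")
# out = model.generate(**inputs, max_new_tokens=128)
# print(tokenizer.decode(out[0]))
```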
**Intended Uses and Limitations**
Due to the model’s small size and its restriction to code, one should not expect it to generalize to domains beyond code or to perform (successful) reasoning over large chunks of code. This model is intended for prototyping ELM-like systems and for solely experimental purposes. It is provided without warranty and should not be used in commercial settings, even though the license permits it.
**Limitations and Biases**
Because of the short context length restriction, and because all repositories with under 100 stars were excluded, we expect our diff model to underperform on underrepresented languages, for instance Lean or Coq.
The output of this model should not be trusted as correct and secure code. This model should not be used in any mission-critical setting where security is of importance. Similarly, the output of this model should only be run in a sandbox such as gVisor.
**Evaluation Results**
Since this model was trained for prototyping, no evaluation has been performed. Future releases will have extensive evaluation.
**Licensing**
This model is licensed under the MIT license. While that license permits commercial use, we do not recommend using this model in commercial settings.
**Acknowledgements**
We’d like to thank Honglu Fan, Harry Saini, Herbie Bradley, and Joel Lehman.