Replit Unveils Code LLM, a Groundbreaking Open Source Language Model for Code Completion
Replit, an online coding platform, has recently released its new open source language model for code completion, Replit Code LLM (replit-code-v1-3b). The model is 77% smaller than OpenAI's Codex and was trained in just one week. The release marks the beginning of a series of large language models (LLMs) from Replit.
The model weights and vocabulary are released under the Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license, which allows commercial use. The two-person AI team behind the project has made significant strides in the field, with one team member previously serving as the head of applied research at Google.
Developers can access Replit's open-source LLM on the company's Hugging Face organization page. Despite being a base model, it is remarkably capable for its size, and there is more potential to be unlocked through finetuning. Replit has reported a 50% performance boost through this process, and the company eagerly anticipates how the community will use the model.
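As a rough sketch of what "accessing the model on Hugging Face" looks like in practice, the snippet below loads the checkpoint with the `transformers` library and generates a completion. It assumes the repository id `replit/replit-code-v1-3b` and that the repo ships custom model code (hence `trust_remote_code=True`); the generation parameters are illustrative defaults, not Replit's recommendations. The logic is wrapped in a function rather than run at import time, since calling it downloads multi-gigabyte weights.

```python
def complete_code(prompt: str, max_new_tokens: int = 64) -> str:
    """Complete a code prompt with replit-code-v1-3b (hypothetical sketch).

    Requires `pip install transformers torch`. Not invoked here because
    it downloads the full model weights from Hugging Face.
    """
    # Imports are kept local so this file stays importable without the deps.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # trust_remote_code is needed when a repo ships its own modeling code.
    tokenizer = AutoTokenizer.from_pretrained(
        "replit/replit-code-v1-3b", trust_remote_code=True
    )
    model = AutoModelForCausalLM.from_pretrained(
        "replit/replit-code-v1-3b", trust_remote_code=True
    )

    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=True,
        temperature=0.2,  # low temperature keeps completions conservative
        pad_token_id=tokenizer.eos_token_id,
    )
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```

Calling `complete_code('def fibonacci(n):\n')` would return the prompt plus the model's continuation.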
Upcoming releases include a UL2R version of the model, which will support infilling, and a detailed tech report on replit-code-v1-3b. The report will also share insights and lessons learned from training a "small" LM on an extensive dataset of code.
Surprisingly, the model has demonstrated strong performance in language tasks and reasoning, often outperforming larger general-purpose models. There is potential for this model to be finetuned into a half-decent code-focused chatbot by using conversation or instruction datasets.
Finetuning is a relatively quick process that requires only a small fraction of the resources needed to train the base model. One notable implication is that companies could finetune their own internal "ChatGPT"s on top of such a base model. The base model checkpoint's CC BY-SA 4.0 license requires credit to Replit, a link to the license, and an indication of any changes made.
The Replit Code LLM was trained on the most popular languages on Replit, with Markdown playing a crucial role in providing a natural language component. The model is highly portable and supports multiple languages. It has shown promising results on benchmarks in comparison to single-language models, such as Salesforce's CodeGen mono variants.
Although the model is not instruction-tuned like a chatbot, it is possible to finetune it for more interactive use. To improve full-function completion, providing a function definition and a descriptive docstring as the prompt is recommended. Because LLMs sometimes generate unrelated content, frontend tools like Copilot and Ghostwriter use stop words or limit completions to a single block to refine the output.
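The two techniques above can be sketched without the model itself: a prompt built from a function signature plus a docstring, and a post-processing step that truncates the raw completion at the first stop word. The particular stop-word list below is illustrative; it is not the actual list Copilot or Ghostwriter uses.

```python
# A prompt shaped as recommended: definition line plus a descriptive docstring.
PROMPT = (
    "def add(a: int, b: int) -> int:\n"
    '    """Return the sum of a and b."""\n'
)

def truncate_completion(
    completion: str,
    stop_words: tuple = ("\ndef ", "\nclass ", "\n#", "\nif __name__"),
) -> str:
    """Cut a raw completion at the earliest stop word, keeping one block.

    The stop words mark the start of a new top-level definition, which
    usually means the model has wandered past the prompted function.
    """
    cut = len(completion)
    for stop in stop_words:
        idx = completion.find(stop)
        if idx != -1:
            cut = min(cut, idx)
    return completion[:cut].rstrip()

# A raw completion that drifts into an unrelated function after the body.
raw = "    return a + b\n\ndef unrelated():\n    pass"
print(truncate_completion(raw))  # prints "    return a + b"
```

Limiting output to a single block this way is cheap and model-agnostic, which is why completion frontends commonly apply it on top of whatever the model emits.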
Replit’s Code LLM has the potential to reshape code completion and inspire new developments in AI technology. With its compact size, impressive performance, and open-source availability, the Replit Code LLM is poised to make waves in the coding world.