# OpenDevin: Code Less, Make More
## Mission 🎯
Welcome to OpenDevin, an open-source project aiming to replicate Devin, an autonomous AI software engineer capable of executing complex engineering tasks and collaborating actively with users on software development projects. Through the power of the open-source community, this project aspires to not only replicate Devin but also enhance and innovate upon it.
## Work in Progress
OpenDevin is still a work in progress. But you can run the alpha version to see things working end-to-end.
### Requirements
- Linux, macOS, or WSL on Windows
- Docker
- Python >= 3.10
- NodeJS >= 14.8
### Installation
First, pull our latest sandbox image:

```bash
docker pull ghcr.io/opendevin/sandbox
```

Note: you need to be able to run `docker` without `sudo`.
Then copy `config.toml.template` to `config.toml`, and add an OpenAI API key to `config.toml` (or see below for how to use different models):

```toml
LLM_API_KEY="sk-..."
```
Next, start the backend:
```bash
python -m pip install pipenv
python -m pipenv install -v
python -m pipenv shell
uvicorn opendevin.server.listen:app --port 3000
```
If `pipenv` doesn't work for you, you can also run:

```bash
python -m pipenv requirements > requirements.txt && python -m pip install -r requirements.txt
```
Then, in a second terminal, start the frontend:
```bash
cd frontend
npm install
npm start
```
You'll see OpenDevin running at `localhost:3001`.
### Picking a Model
We use LiteLLM, so you can run OpenDevin with any foundation model, including OpenAI, Claude, and Gemini. LiteLLM has a full list of providers.
To change the model, set `LLM_MODEL` and `LLM_API_KEY` in `config.toml`.
For example, to run Claude:
```toml
LLM_API_KEY="your-api-key"
LLM_MODEL="claude-3-opus-20240229"
```
You can also set the base URL for local/custom models:
```toml
LLM_BASE_URL="https://localhost:3000"
```
And you can customize which embeddings are used for the vector database storage:
```toml
LLM_EMBEDDING_MODEL="llama2" # can be "llama2", "openai", "azureopenai", or "local"
```
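For reference, all of these settings can live together in `config.toml`. Here is a sketch for a locally served Ollama model; the values are illustrative, and the exact model name depends on what your LiteLLM provider expects:

```toml
LLM_API_KEY="na"                       # some local providers ignore the key, but it must be set
LLM_MODEL="ollama/llama2"              # LiteLLM's provider/model syntax
LLM_BASE_URL="http://localhost:11434"  # wherever your local server listens
LLM_EMBEDDING_MODEL="local"            # one of the options listed above
```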
### Running on the Command Line
You can run OpenDevin from your command line:
```bash
PYTHONPATH=`pwd` python opendevin/main.py -d ./workspace/ -i 100 -t "Write a bash script that prints 'hello world'"
```
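The flags in the command above plausibly correspond to the workspace directory (`-d`), an iteration cap (`-i`), and the task description (`-t`). The sketch below is a hypothetical `argparse` rendering of that interface, meant only to illustrate what the flags convey; the real `opendevin/main.py` may differ:

```python
# Hypothetical sketch of a CLI with the same flags as the command above;
# the real opendevin/main.py may be implemented differently.
import argparse

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(description="Run an agent on a task")
    parser.add_argument("-d", "--directory", default="./workspace/",
                        help="workspace directory the agent may read and write")
    parser.add_argument("-i", "--max-iterations", type=int, default=100,
                        help="upper bound on agent steps before giving up")
    parser.add_argument("-t", "--task", required=True,
                        help="natural-language task description")
    return parser

if __name__ == "__main__":
    args = build_parser().parse_args()
    print(f"Running task {args.task!r} in {args.directory} "
          f"(max {args.max_iterations} iterations)")
```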
## 🤔 What is Devin?
Devin represents a cutting-edge autonomous agent designed to navigate the complexities of software engineering. It leverages a combination of tools such as a shell, code editor, and web browser, showcasing the untapped potential of LLMs in software development. Our goal is to explore and expand upon Devin's capabilities, identifying both its strengths and areas for improvement, to guide the progress of open code models.
## 🐚 Why OpenDevin?
The OpenDevin project is born out of a desire to replicate, enhance, and innovate beyond the original Devin model. By engaging the open-source community, we aim to tackle the challenges faced by Code LLMs in practical scenarios, producing works that significantly contribute to the community and pave the way for future advancements.
## ⭐️ Research Strategy
Achieving full replication of production-grade applications with LLMs is a complex endeavor. Our strategy involves:
- Core Technical Research: Focusing on foundational research to understand and improve the technical aspects of code generation and handling.
- Specialist Abilities: Enhancing the effectiveness of core components through data curation, training methods, and more.
- Task Planning: Developing capabilities for bug detection, codebase management, and optimization.
- Evaluation: Establishing comprehensive evaluation metrics to better understand and improve our models.
## 🛠 Technology Stack
- Sandboxing Environment: Ensuring safe execution of code using technologies like Docker and Kubernetes.
- Frontend Interface: Developing user-friendly interfaces for monitoring progress and interacting with Devin, potentially leveraging frameworks like React or creating a VSCode plugin for a more integrated experience.
## 🚀 Next Steps
Building an MVP demo is our top priority. Here are the most important things to do:
- UI: a chat interface, a shell demonstrating commands, a browser, etc.
- Architecture: an agent framework with a stable backend, which can read, write and run simple commands
- Agent: capable of generating bash scripts, running tests, etc.
- Evaluation: a minimal evaluation pipeline that is consistent with Devin's evaluation.
After finishing building the MVP, we will move towards research in different topics, including foundation models, specialist capabilities, evaluation, agent studies, etc.
## How to Contribute
OpenDevin is a community-driven project, and we welcome contributions from everyone. Whether you're a developer, a researcher, or simply enthusiastic about advancing the field of software engineering with AI, there are many ways to get involved:
- Code Contributions: Help us develop the core functionalities, frontend interface, or sandboxing solutions.
- Research and Evaluation: Contribute to our understanding of LLMs in software engineering, participate in evaluating the models, or suggest improvements.
- Feedback and Testing: Use the OpenDevin toolset, report bugs, suggest features, or provide feedback on usability.
For details, please check this document.