Unreal Project Extractor

A demo project showing how any language model capable of generating code can be used to generate entire projects. You enter a name and description for a project, and the code is recursively retrieved and realized from an imaginary repository. See generated_example_project, which ChatGPT produced through this project from the following prompt:

Name: webhoppr
Description: A simple web-based blog using flask.

This project is a proof-of-concept, mostly concerned with the feasibility of this approach. There are still a few caveats and limitations, and I provide suggestions for resolving them below.

Method

I use the Linux terminal emulation trick to get a well-defined interface to the language model that allows for recursive extraction of imaginary file systems, automating the entire pipeline from idea to "project uploadable to GitHub". This is significantly more powerful and expressive than prompting for small code snippets one at a time, and it allows more complex code to be generated, embedded in the context of an entire project.
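
To make this concrete, here is a minimal sketch of such an extraction loop (not the project's actual code): query_model stands in for whatever ChatGPT or API call drives the terminal emulation, and the seed prompt and exact commands are assumptions.

    import os

    def query_model(history, command):
        """Send the conversation so far plus a new terminal command to the LLM
        and return its imaginary terminal output. Placeholder for the real
        ChatGPT / OpenAI API call; the project's actual interface may differ."""
        raise NotImplementedError

    def extract(history, remote_path, local_path):
        """Recursively realize an imaginary directory tree on the local disk."""
        os.makedirs(local_path, exist_ok=True)
        # Ask the imaginary terminal what this directory contains;
        # "ls -p" marks directories with a trailing slash.
        listing = query_model(history, f"ls -p {remote_path}")
        for entry in listing.split():
            name = entry.rstrip("/")
            if entry.endswith("/"):
                # Subdirectory: recurse into it.
                extract(history, f"{remote_path}/{name}", os.path.join(local_path, name))
            else:
                # Regular file: "cat" it and write the content to disk.
                content = query_model(history, f"cat {remote_path}/{entry}")
                with open(os.path.join(local_path, entry), "w") as f:
                    f.write(content)

The history would be seeded with a prompt along the lines of "act as a Linux terminal inside the repository $NAME, described as $DESCRIPTION; reply only with terminal output". Note that this naive sketch walks the tree depth-first, which the Caveats section below advises against; the real pipeline should reorder files.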

Unreal Computing

This project was part of my deep dive into ChatGPT when it came out. During this time, I realized that LLMs of this kind can act as an almost universal emulator: Linux terminals are the best-known example, but they can also emulate obscure 80s CNC machine terminals and even humans. Unlike their "real" computing counterparts, though, LLM representations are very malleable. Have a nasty bug? Just say "{imagine this bug didn't exist}" and it's gone. Want to add a feature? Just say "{imagine this feature existed}" and it's there. The code itself becomes unnecessary. This is the power of unreal computing.

It seems possible to me that we will eventually operate in entirely unreal computing environments, which may call out to one or more underlying "real" computing systems. This project is another example of this: it extracts entirely unreal/imagined repositories from the latent space of a language model and bridges the gap to a real project that you can upload to GitHub. Unreal Computing is very much real.

Usage

First, you have to decide whether to use ChatGPT or the regular models from the official API. I did the development and testing with ChatGPT through an unofficial API (the feasibility of which may have changed), but the script is set up to use the official models by default. However, I couldn't test this setup (I have no official API key), so feel free to open a PR to fix any issues you find.

  1. pip install -r requirements.txt
  2. OPENAI_API_KEY=your_api_key python generate_extractor.py
  3. You will be asked for a name and description; the generated project will be written to output/$NAME
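
If you go the official-API route, each terminal command presumably boils down to a single completion call. A minimal sketch, assuming the pre-1.0 openai Python package and a completion model such as text-davinci-003 (the model name and parameters are my assumptions, not necessarily what generate_extractor.py uses):

    import os
    import openai  # assumes the pre-1.0 interface of the official package

    openai.api_key = os.environ["OPENAI_API_KEY"]

    def query_model(prompt):
        """Return the model's continuation of the terminal-emulation prompt."""
        response = openai.Completion.create(
            model="text-davinci-003",  # assumed model; adjust as needed
            prompt=prompt,
            max_tokens=1024,
            temperature=0.2,  # low temperature keeps listings and file contents stable
        )
        return response["choices"][0]["text"]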

Caveats and Limitations

  • Just a proof-of-concept; the code is bad. If you want to use this project, take the method, idea, and prompts, but leave the code.
  • Context Window
    • Because the context window is limited, we currently cannot generate very large coherent projects. This can be mitigated by increasing the context window or by regularly repeating the prompt and key files.
  • Order of generating files
    • The order in which imaginary files are selected for generation matters. They need to be generated in sequence to be coherent and fit together, but the order might affect the content. It is, for example, better to generate files with higher levels of abstraction (like README.md) first. Definitely avoid depth-first generation, and manually fix the order of certain files; one possible ordering heuristic is sketched after this list.
  • Reliability and robustness
    • The "well-defined interface" of a terminal turns out to be less than that when working with a language model. Syntax of outputs like "ls" might change suddenly or appear as if with different parameters. In those cases, the text history needs to be rolled back.
  • Licensing/copyright
    • If you describe a project that actually exists, GPT will happily attempt to reconstruct the best version of that repository it can muster, including imaginary licenses and real or unreal names of people. So check first whether you're looking at a "novel" GPT project or a (lossily) compressed version of a real project with the same name.
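
Regarding the ordering caveat above, one possible heuristic (a hypothetical illustration, not what the script currently does) is to realize high-abstraction files first and otherwise proceed roughly breadth-first:

    # Hypothetical ordering heuristic: high-abstraction files first, then
    # shallow paths before deep ones, ties broken alphabetically.
    PRIORITY = ["README.md", "requirements.txt", "setup.py"]  # assumed examples

    def generation_order(paths):
        def key(path):
            name = path.rsplit("/", 1)[-1]
            rank = PRIORITY.index(name) if name in PRIORITY else len(PRIORITY)
            return (rank, path.count("/"), path)
        return sorted(paths, key=key)

    # generation_order(["src/app.py", "README.md", "src/templates/index.html"])
    # -> ["README.md", "src/app.py", "src/templates/index.html"]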

The Future

These are some of the things I'm sure we will see in the coming months, and that I may tackle in further demos and PoCs:

  • Iterative refinement: simple interfaces that allow you to refine the content of specific files using natural-language instructions
  • Improving real projects by drafting PRs: by interleaving real and imagined code, we can take action in the real world. What starts out as an imaginary repository might turn into a real project, whose code is then fed back into the context window together with incoming issues, enabling automatic PR drafting. Microsoft will probably enable this on all GitHub projects in the future. Issue? Click "Draft PR" and go from there.
