OpenMusenet2

Open source WIP recreation of OpenAI's MuseNet. It supports many features of the original MuseNet by OpenAI, such as multiple-track support (although tracks are not guided to specific instruments), 4 levels of dynamics, and note start time, length, and pitch.

Generating

Here is the Google Colab notebook for generating.
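If you'd rather generate locally instead of in the notebook, a minimal sketch using the Hugging Face transformers API could look like the one below. The checkpoint path and the "<start>" prompt token are assumptions for illustration; the Colab notebook is the actual reference for how generation is set up.

```python
# Minimal local-generation sketch. The checkpoint path and prompt token are
# assumptions; the Colab notebook is the authoritative reference.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model_name = "path/to/openmusenet2.1-checkpoint"  # hypothetical local checkpoint
tokenizer = GPT2Tokenizer.from_pretrained(model_name)
model = GPT2LMHeadModel.from_pretrained(model_name)

prompt = "<start>"  # hypothetical start-of-song token
inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(
    **inputs,
    max_length=512,
    do_sample=True,
    top_k=50,          # sampling params mentioned under "Improvement ideas"
    temperature=1.0,
)
print(tokenizer.decode(output[0]))
```

The decoded text would then be converted back to a MIDI file with the converter notebooks.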

Samples

Für Elise https://github.com/hidude562/OpenMusenet2/assets/82677882/692dd270-8ffd-4967-9af7-d4aa612fbaf8

Alla Turca https://github.com/hidude562/OpenMusenet2/assets/82677882/6010b13e-1597-4604-8489-1156b0362cf6

Technical things

The current model as of writing this is "OpenMusenet2.1", which is a fine-tuned version of GPT-2 Medium trained on ~10,000 songs (around 20 KB per song). I don't remember where I got the dataset from (I had actually downloaded it the year prior), but it is ~169,000 MIDI files of types 0 and 1 with multiple tracks, tempo changes, etc. (although tempo changes and similar metadata are ignored).
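As a rough outline of what fine-tuning GPT-2 Medium on a text-encoded song corpus involves, here is a hedged sketch using the Hugging Face Trainer. The file path, block size, and hyperparameters are illustrative assumptions, not the actual settings used for OpenMusenet2.1.

```python
# Fine-tuning sketch (illustrative only: the dataset path, block size, and
# hyperparameters are assumptions, not the real OpenMusenet2.1 configuration).
from transformers import (GPT2LMHeadModel, GPT2Tokenizer, Trainer,
                          TrainingArguments, TextDataset,
                          DataCollatorForLanguageModeling)

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")
model = GPT2LMHeadModel.from_pretrained("gpt2-medium")

# A plain text file of encoded songs (hypothetical path and format).
train_dataset = TextDataset(tokenizer=tokenizer,
                            file_path="dataset/songs_encoded.txt",
                            block_size=1024)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

args = TrainingArguments(output_dir="openmusenet2-finetune",
                         num_train_epochs=3,
                         per_device_train_batch_size=2,
                         save_steps=5000)
Trainer(model=model, args=args,
        data_collator=collator,
        train_dataset=train_dataset).train()
```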

Training your own model/dataset

Go to "Notebooks" -> "Converters" -> "midiFormater.ipynb" and you can open that with Google Colab (or whatever notebook editor you use). The process from there should be relatively simple.

Once you've downloaded your data, the process will vary depending on which notebook you are using to train, so I can't really elaborate on that.
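For a sense of what the MIDI-to-text conversion step does, here is a rough sketch using pretty_midi. The token names and layout below are made up for illustration and will not match the real format; midiFormater.ipynb defines the actual encoding.

```python
# Rough illustration of MIDI-to-text conversion. The token names and layout are
# invented; midiFormater.ipynb defines the real format used by the model.
import pretty_midi

def encode_midi(path):
    midi = pretty_midi.PrettyMIDI(path)
    tokens = []
    for track_index, instrument in enumerate(midi.instruments):
        for note in instrument.notes:
            # Quantize velocity to 4 dynamic levels, as described above.
            dynamic = min(3, note.velocity // 32)
            tokens.append(
                f"t{track_index} s{note.start:.2f} l{note.end - note.start:.2f} "
                f"n{note.pitch} d{dynamic}"
            )
    return " ".join(tokens)

print(encode_midi("example.mid"))
```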

Improvement ideas

  • Fine-tune inference parameters for the model (top_k, temperature, ...) (you can help too! see the sketch after this list)
  • Train GPT-2 774M or a model with a larger context size
  • Train a large version of the model
  • Fix training data where some MIDIs play back at 10x their intended speed (the model learns to emulate this behavior)
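A hedged sketch of a simple sampling-parameter sweep for the first item (the checkpoint path, prompt token, and parameter values are assumptions; you would render and listen to each output to judge quality):

```python
# Hypothetical sweep over sampling parameters; checkpoint path and prompt are assumptions.
import itertools
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model_name = "path/to/openmusenet2.1-checkpoint"  # hypothetical
tokenizer = GPT2Tokenizer.from_pretrained(model_name)
model = GPT2LMHeadModel.from_pretrained(model_name)
inputs = tokenizer("<start>", return_tensors="pt")  # hypothetical prompt token

for top_k, temperature in itertools.product([20, 50, 100], [0.8, 1.0, 1.2]):
    output = model.generate(**inputs, max_length=256, do_sample=True,
                            top_k=top_k, temperature=temperature)
    with open(f"sample_k{top_k}_t{temperature}.txt", "w") as f:
        f.write(tokenizer.decode(output[0]))
```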
