MIDI composition with {dplyr}, {midiblender}, and {pyramidi}
midiblender
pyramidi
rstats
dplyr
midi composition
Trying some dplyr composition patterns for creating midi files
Author
Matt Crump
Published
February 12, 2024
Code
from diffusers import DiffusionPipeline
from transformers import set_seed
from PIL import Image
import torch
import random
import ssl
import os

ssl._create_default_https_context = ssl._create_unverified_context

# locate library
# model_id = "./stable-diffusion-v1-5"
model_id = "dreamshaper-xl-turbo"

pipeline = DiffusionPipeline.from_pretrained(
    pretrained_model_name_or_path = "../../../../bigFiles/huggingface/dreamshaper-xl-turbo/"
)
pipeline = pipeline.to("mps")

# Recommended if your computer has < 64 GB of RAM
pipeline.enable_attention_slicing("max")

prompt = "composing music with rstats. dplyr musical notes. tidyverse. hexagons everywhere. 3d. music inside hexagons floating in the universe. musical notes inside the hexagons. cartoon. linocut"

for s in range(30):
    for n in [5, 10]:
        seed = s + 21
        num_steps = n + 1
        set_seed(seed)
        image = pipeline(prompt,
                         height = 1024,
                         width = 1024,
                         num_images_per_prompt = 1,
                         num_inference_steps = num_steps)
        image_name = "images/synth_{}_{}.jpeg"
        image_save = image.images[0].save(image_name.format(seed, num_steps))
Massive modular synthesizer. Eurorack modular synthesizer. Huge synthesizer room, full of modular synthesizers with patch cords connecting everywhere. 80s cartoon. Linocut. - dreamshaper
This morning I added some dplyr-style functions to midiblender for explicit, row-by-row construction of a data frame containing midi information. I’ll provide some examples here, and use this post to try out a few compositional goals with the dplyr approach.
pyramidi also has examples of composition with dplyr syntax that are worth checking out. My approach relies on several pyramidi internal functions, and is pretty similar overall.
The following block of code shows an example of systematically creating a midi data frame, on a row-by-row basis. The functions are mostly wrappers to dplyr::add_row, but they have midi variables in them.
Functions that add a row start with “add”, which makes them easy to find with autocomplete while programming. These functions also pass ..., which is useful for placing a row .before or .after another row.
This code block writes important meta messages, shows that it is possible to write program_change and control_change messages, and writes a couple notes before ending the track.
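Here is a minimal sketch of that kind of chunk. It only uses the midiblender helpers that show up again in the export code later in this post, plus plain dplyr::add_row() for the note rows; the specific notes, times, and values are placeholders chosen for illustration.
Code
library(midiblender)
library(dplyr)

new_midi_df <- create_empty_midi_df() %>%
  # important meta messages
  add_meta_track_name(name = "My track") %>%
  add_meta_tempo(tempo = 500000) %>%
  add_meta_time_sig(numerator = 4,
                    denominator = 4,
                    clocks_per_click = 36,
                    notated_32nd_notes_per_beat = 8) %>%
  # program_change and control_change messages
  add_program_change(program = 0, channel = 0) %>%
  add_control_change(control = 0, value = 0) %>%
  # a couple of placeholder notes, written as note_on/note_off pairs
  # (sketch: adding the note rows with dplyr::add_row directly)
  add_row(i_track = 0, meta = FALSE, type = "note_on",  note = 60, velocity = 64, time = 0) %>%
  add_row(i_track = 0, meta = FALSE, type = "note_off", note = 60, velocity = 0,  time = 96) %>%
  add_row(i_track = 0, meta = FALSE, type = "note_on",  note = 63, velocity = 64, time = 0) %>%
  add_row(i_track = 0, meta = FALSE, type = "note_off", note = 63, velocity = 0,  time = 96) %>%
  # end the track
  add_meta_end_of_track()

# because the add_* wrappers pass ..., a row could also be placed
# somewhere specific, e.g. .after = 4 would slot a control_change
# in right after the program_change row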
This shows how to take a midi dataframe like the above, and export it to disk as a .mid file.
First, pyramidi::MidiFramer$new() creates a new pyramidi object.
After creating the object, it is possible to update new_pyramidi_object$ticks_per_beat with a new value. I believe the default value is 960L, and the code below sets it to a smaller value (96L).
The new_midi_df from above is then updated within the pyramidi object using new_pyramidi_object$mf$midi_frame_unnested$update_unnested_mf(new_midi_df).
Finally, new_pyramidi_object$mf$write_file("file_name.mid") writes the file to disk.
Code
# Initialize new pyramidi object
new_pyramidi_object <- pyramidi::MidiFramer$new()
# update ticks per beat
new_pyramidi_object$ticks_per_beat <- 96L
# update object with new midi df
new_pyramidi_object$mf$midi_frame_unnested$update_unnested_mf(new_midi_df)
# write to midi file
new_pyramidi_object$mf$write_file("file_name.mid")
Composing with a wide data frame, then pivoting to long
Midi files write note_on and note_off messages in succession, with time stamps coding the relative time since the last message. For composition, it can be more convenient to work with a wider data frame, where each note gets a single row with absolute note_on and note_off times.
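To make the wide-versus-long distinction concrete, here is a tiny throwaway example (two made-up notes, not part of the composition below). The wide frame stores absolute on/off times in ticks, one row per note; pivoting longer and differencing the time stamps gives the message-style rows with relative times that a midi file expects.
Code
library(dplyr)

# two made-up notes in wide format, with absolute tick times
wide <- tibble::tibble(
  note     = c(60, 63),
  note_on  = c(0, 96),   # absolute time the note starts
  note_off = c(24, 120)  # absolute time the note ends
)

# pivot to long, then convert absolute times to time-since-last-message
long <- wide %>%
  tidyr::pivot_longer(c("note_on", "note_off"),
                      names_to = "type",
                      values_to = "time") %>%
  mutate(time = time - lag(time, default = 0))

long
# note 60, note_on,  time 0
# note 60, note_off, time 24
# note 63, note_on,  time 72
# note 63, note_off, time 24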
Here’s an example of using dplyr to create a series of notes over time. The notes are randomly chosen from the vector possible_notes. The rhythm defining when a note occurs is generated by bresenham_euclidean(), which can produce some nice sounding rhythms. The code chunk will generate as many bars as defined in bars, and the shortest note is a 16th note.
Code
library(dplyr)

# note parameters
bars <- 4 # number of bars
bar_time_steps <- 16 # number of time steps in a bar
note_duration <- 24 # note duration in ticks
possible_notes <- c(60, 63, 65, 66, 67, 70, 72, 75) # midi note values to pick

# create a tibble to compose notes in time
compose_notes <- tibble::tibble(
  note_id = integer(),
  note = integer(),
  beat_on = integer(),
  note_on = integer(),
  note_off = integer()
) %>%
  # add multiple bars worth of notes
  add_row(
    beat_on = c(replicate(bars,
                          bresenham_euclidean(sample(c(2:15), 1),
                                              bar_time_steps,
                                              start = 1))),
    note = sample(possible_notes, size = bar_time_steps * bars, replace = TRUE)
  ) %>%
  # handle note times
  mutate(
    note_id = 1:n(),
    note_on = (1:n() - 1) * note_duration,
    note_off = note_on + note_duration
  ) %>%
  # keep events where a beat occurred
  filter(beat_on == 1)

# print to show
knitr::kable(head(compose_notes))
| note_id| note| beat_on| note_on| note_off|
|-------:|----:|-------:|-------:|--------:|
|       1|   65|       1|       0|       24|
|       9|   72|       1|     192|      216|
|      17|   63|       1|     384|      408|
|      19|   65|       1|     432|      456|
|      20|   72|       1|     456|      480|
|      21|   72|       1|     480|      504|
At this point the compose_notes dataframe contains note_on and note_off information in wide format. A quick call to tidyr::pivot_longer, plus differencing the time stamps to convert them to relative time, gives a dataframe that is nearly ready for export.
Code
compose_notes <- compose_notes %>%
  # pivot to long
  tidyr::pivot_longer(c("note_on", "note_off"),
                      names_to = "type",
                      values_to = "time") %>%
  # relative time
  mutate(time = time - lag(time, default = 0))

# print to show
knitr::kable(head(compose_notes))
| note_id| note| beat_on| type     | time|
|-------:|----:|-------:|:---------|----:|
|       1|   65|       1| note_on  |    0|
|       1|   65|       1| note_off |   24|
|       9|   72|       1| note_on  |  168|
|       9|   72|       1| note_off |   24|
|      17|   63|       1| note_on  |  168|
|      17|   63|       1| note_off |   24|
Now we have a body of midi messages. The last step is to put them into a full-fledged midi dataframe, and export them.
Code
# create new midi_df
## add to a new midi df
new_midi_df <- create_empty_midi_df() %>%
  # initialize
  add_meta_track_name(name = "My track") %>%
  add_meta_tempo(tempo = 500000) %>%
  add_meta_time_sig(
    numerator = 4,
    denominator = 4,
    clocks_per_click = 36,
    notated_32nd_notes_per_beat = 8
  ) %>%
  add_program_change(program = 0, channel = 0) %>%
  add_control_change(control = 0, value = 0) %>%
  # add new notes <--------------- Adding stuff from compose_notes
  add_row(
    i_track = rep(0, dim(compose_notes)[1]),
    meta = rep(FALSE, dim(compose_notes)[1]),
    note = compose_notes$note,
    type = compose_notes$type,
    time = compose_notes$time,
    velocity = 64
  ) %>%
  add_meta_end_of_track()

# write midi
# Initialize new pyramidi object
new_pyramidi_object <- pyramidi::MidiFramer$new()
# update ticks per beat
new_pyramidi_object$ticks_per_beat <- 96L
# update object with new midi df
new_pyramidi_object$mf$midi_frame_unnested$update_unnested_mf(new_midi_df)
# write to midi file
new_pyramidi_object$mf$write_file("file_name.mid")
Making it less of a wall of code
I’m not working on this right now. Going with walls of code.
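That said, the pyramidi export steps repeat verbatim in every chunk below, so if I ever do tidy this up, a first pass might be a small wrapper like the following. This is a hypothetical helper, not a function from midiblender or pyramidi; it just bundles the calls already used throughout this post.
Code
# hypothetical convenience wrapper (not part of midiblender or pyramidi)
write_midi_df <- function(midi_df, file_name, ticks_per_beat = 96L) {
  new_pyramidi_object <- pyramidi::MidiFramer$new()
  new_pyramidi_object$ticks_per_beat <- ticks_per_beat
  new_pyramidi_object$mf$midi_frame_unnested$update_unnested_mf(midi_df)
  new_pyramidi_object$mf$write_file(file_name)
}

# usage:
# write_midi_df(new_midi_df, "file_name.mid")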
Frequency-biased sequences
I’ve got a project coming up where I will likely need to generate a whole bunch of “musical” sequences with specific kinds of statistical structure. I’m not sure whether I will use the {dplyr} style code for this. This is a note to future self about what it might look like.
- got something that will work later
- need to write functions for creating unequal frequency vectors with specific constraints
- Move development on this issue to another castle.
Code
# note parameters
bars <- 4
possible_time_steps <- 16
note_duration <- 24
possible_notes <- c(60, 63, 65, 66, 67, 70, 72, 75)
total_notes <- 8
total_beats <- bars * possible_time_steps

# need to work out some algorithms for unequal frequency distribution
# generation, these are enough for an example
equal_frequencies <- rep(total_beats / 8, 8)
half_frequencies <- equal_frequencies + (equal_frequencies * rep(c(.5, -.5), each = total_notes / 2))
most_unequal_frequencies <- c(rep(1, (total_notes - 1)), (total_notes * (total_notes - 1) + 1))

note_frequency_matrix <- rbind(equal_frequencies, half_frequencies, most_unequal_frequencies)

compose_notes <- tibble::tibble(
  note_id = integer(),
  note = integer(),
  beat_on = integer(),
  note_on = integer(),
  note_off = integer()
) %>%
  # 1 beat every time_step
  rowwise() %>%
  add_row(
    beat_on = 1,
    note = sample(rep(sample(possible_notes), times = note_frequency_matrix[3, ]))
  ) %>%
  ungroup() %>%
  # handle note times
  mutate(
    note_id = 1:n(),
    note_on = (1:n() - 1) * note_duration,
    note_off = note_on + note_duration
  ) %>%
  filter(beat_on == 1) %>%
  # pivot to long
  tidyr::pivot_longer(c("note_on", "note_off"),
                      names_to = "type",
                      values_to = "time") %>%
  mutate(time = time - lag(time, default = 0))

## add to a new midi df
new_midi_df <- create_empty_midi_df() %>%
  # initialize
  add_meta_track_name(name = "My track") %>%
  add_meta_tempo(tempo = 500000) %>%
  add_meta_time_sig(
    numerator = 4,
    denominator = 4,
    clocks_per_click = 36,
    notated_32nd_notes_per_beat = 8
  ) %>%
  add_program_change(program = 0, channel = 0) %>%
  add_control_change(control = 0, value = 0) %>%
  # use dplyr::add_row to add a bunch of notes
  add_row(
    i_track = rep(0, dim(compose_notes)[1]),
    meta = rep(FALSE, dim(compose_notes)[1]),
    note = compose_notes$note,
    type = compose_notes$type,
    time = compose_notes$time,
    velocity = 64
  ) %>%
  add_meta_end_of_track()

# write midi
# Initialize new pyramidi object
new_pyramidi_object <- pyramidi::MidiFramer$new()
# update ticks per beat
new_pyramidi_object$ticks_per_beat <- 96L
# update object with new midi df
new_pyramidi_object$mf$midi_frame_unnested$update_unnested_mf(new_midi_df)
# write to midi file
new_pyramidi_object$mf$write_file("most_unequal.mid")
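As a placeholder for that future work, here is one hypothetical shape such a function could take. It is not implemented in midiblender; the constraints sketched here are that every note appears at least once and that the frequencies sum to the total number of beats, with a skew parameter controlling how unequal the distribution is (skew = 0 reproduces the equal_frequencies vector above).
Code
# hypothetical sketch, not a midiblender function
make_frequency_vector <- function(total_notes, total_beats, skew = 0) {
  # exponentially decaying weights; skew = 0 gives equal weights
  weights <- exp(-skew * (0:(total_notes - 1)))
  weights <- weights / sum(weights)
  # constraint: every note appears at least once
  freqs <- pmax(floor(weights * total_beats), 1)
  # constraint: frequencies sum to total_beats (absorb rounding in note 1)
  freqs[1] <- freqs[1] + (total_beats - sum(freqs))
  freqs
}

# usage:
# make_frequency_vector(total_notes = 8, total_beats = 64, skew = 0) # equal
# make_frequency_vector(total_notes = 8, total_beats = 64, skew = 1) # biased toward the first note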
12-bar blues sequence
I like a good blues scale. Undoubtedly, whatever happens next won’t be bluesy, but that’s ok.
Algorithm components
- Sample notes from a blues scale
- Every bar, randomly sample a euclidean rhythm to fill the bar
- Randomly assign notes to the beats in the euclidean rhythm
- Create a few tracks of this
- Try shifting chords, do 12-bar blues

Got it working, nice.
Code
all_tracks <- data.frame()

# loop for each track
for (t in 1:3) {
  # note parameters
  bars <- 12 * 4 * 4
  possible_time_steps <- 16
  note_duration <- 24
  possible_notes <- c(60, 63, 65, 66, 67, 70, 72, 75)
  key_vector <- rep(rep(c(0, 0, 0, 0, 5, 5, 0, 0, 7, 5, 0, 7), each = possible_time_steps), 4 * 4)

  compose_notes <- tibble::tibble(
    note_id = integer(),
    note = integer(),
    beat_on = integer(),
    note_on = integer(),
    note_off = integer()
  ) %>%
    # use euclidean rhythm
    rowwise() %>%
    add_row(
      beat_on = c(replicate(
        bars,
        bresenham_euclidean(sample(c(1, 2, 2, 2, 3, 4, 5, 5, 6, 8, 15), 1),
                            possible_time_steps,
                            start = 1)
      )),
      note = sample(possible_notes,
                    size = possible_time_steps * bars,
                    replace = TRUE) + key_vector
    ) %>%
    ungroup() %>%
    # handle note times
    mutate(
      note_id = 1:n(),
      note_on = (1:n() - 1) * note_duration,
      note_off = note_on + note_duration
    ) %>%
    filter(beat_on == 1) %>%
    # pivot to long
    tidyr::pivot_longer(c("note_on", "note_off"),
                        names_to = "type",
                        values_to = "time") %>%
    mutate(time = time - lag(time, default = 0))

  ###################### End composition ######################

  # add composition to a new midi df
  new_midi_df <- create_empty_midi_df() %>%
    # initialize
    add_meta_track_name(name = "My track") %>%
    add_meta_tempo(tempo = 500000) %>%
    add_meta_time_sig(
      numerator = 4,
      denominator = 4,
      clocks_per_click = 36,
      notated_32nd_notes_per_beat = 8
    ) %>%
    add_program_change(program = 0, channel = 0) %>%
    add_control_change(control = 0, value = 0) %>%
    # Composition added here
    add_row(
      i_track = rep(0, dim(compose_notes)[1]),
      meta = rep(FALSE, dim(compose_notes)[1]),
      note = compose_notes$note,
      type = compose_notes$type,
      time = compose_notes$time,
      velocity = 64
    ) %>%
    add_meta_end_of_track() %>%
    mutate(i_track = t) # set current track number

  all_tracks <- rbind(all_tracks, new_midi_df)
}

# write midi
# Initialize new pyramidi object
new_pyramidi_object <- pyramidi::MidiFramer$new()
# update ticks per beat
new_pyramidi_object$ticks_per_beat <- 96L
# update object with new midi df
new_pyramidi_object$mf$midi_frame_unnested$update_unnested_mf(all_tracks)
# write to midi file
new_pyramidi_object$mf$write_file("bluesy_A.mid")
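For reference, the key_vector in that loop shifts each bar by 0, 5, or 7 semitones, which I am reading as the I, IV, and V chords of a 12-bar blues. Spelled out as a small lookup table (just bookkeeping, not part of the composition code):
Code
# assumed interpretation of the per-bar transposition in key_vector:
# 0 semitones = I chord, 5 = IV chord (up a fourth), 7 = V chord (up a fifth)
data.frame(
  bar   = 1:12,
  shift = c(0, 0, 0, 0, 5, 5, 0, 0, 7, 5, 0, 7),
  chord = c("I", "I", "I", "I", "IV", "IV", "I", "I", "V", "IV", "I", "V")
)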
I put the three midi tracks into Ableton, added synth voices and drums, and that’s what it sounds like.
Hmmm, the dplyr style worked out. I didn’t think it would be so easy to get a 12-bar blues thing going. It still sounds like a computer made it, but a computer did make it, so that’s fair.