Scratchpad for messing around with a MINERVA II model in a musical chord context. Notes to self
Author: Matt Crump
Published: January 23, 2024
Show the code
from diffusers import DiffusionPipeline
from transformers import set_seed
from PIL import Image
import torch
import random
import ssl
import os

ssl._create_default_https_context = ssl._create_unverified_context

#locate library
#model_id = "./stable-diffusion-v1-5"
model_id = "dreamshaper-xl-turbo"

pipeline = DiffusionPipeline.from_pretrained(
    pretrained_model_name_or_path="../../../../bigFiles/huggingface/dreamshaper-xl-turbo/"
)
pipeline = pipeline.to("mps")

# Recommended if your computer has < 64 GB of RAM
pipeline.enable_attention_slicing("max")

prompt = "Minerva robot of wisdom. binary codes. thundercat cartoon. music. music theory. conceptual. reverberation. resonance. colorful."

for s in range(30):
    for n in [5, 10]:
        seed = s + 21
        num_steps = n + 1
        set_seed(seed)
        image = pipeline(prompt,
                         height=1024,
                         width=1024,
                         num_images_per_prompt=1,
                         num_inference_steps=num_steps)
        image_name = "images/synth_{}_{}.jpeg"
        image_save = image.images[0].save(image_name.format(seed, num_steps))
Minerva robot of wisdom. binary codes. thundercat cartoon. music. music theory. conceptual. reverberation. resonance. colorful.
This is a scratchpad post with R code to explore some esoteric computational modeling ideas. I want to get coding, but will put a bit of context around this.
My plan is to take the chord vector space I made yesterday (see last post), and put it into the memory of a MINERVA-II model. Then, I’m going to probe the model with various input patterns, and see what comes out.
I wish I had time to review MINERVA-II in more depth here, but I don’t. Very quickly, MINERVA-II is an instance-based model of human memory processes by Douglas Hintzman (Hintzman 1984). This model was inspired by Richard Semon’s memory theory (Semon 1923), which I find very poetic. Semon made up his own terms so that he could more precisely state his theoretical ideas, including words like engram, engraphic, and homophony.
The basic idea is that people store the patterns of individual experiences in memory, and a current pattern can retrieve old memories by similarity. MINERVA-II uses a resonance metaphor. A pattern is presented to a memory system. The pattern activates all of the traces in memory in proportion to their similarity to the pattern. In this way, memory is a call-and-response process. The pattern of the present moment resonates with the memory system, bringing forth a chorus of activated traces. This memory response is called the echo. The resonance between the structure of the pattern in the present moment and similar traces from the past is what Richard Semon called homophony. I have some lecture material on these concepts in my intro to cognition course.
Now onto the R code. Memory is the chord vector matrix. I can "probe" the model by giving it any feature vector as an input. The input probe activates every chord in memory by its similarity to the probe (using the vector cosine). The memory responds with a similarity-weighted sum: all of the traces are multiplied by their similarity to the probe and then summed into a single feature vector, called the echo.
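Here is the probe-to-echo computation just described, wrapped up as one small helper function for reference. This is only a sketch: probe_echo is my own name, the cosine is hand-rolled rather than the RsemanticLibrarian version, and the chunks below mostly spell the same steps out inline.

# A compact sketch of the probe -> echo computation described above.
# probe: a 12-element note vector; memory: the trace matrix (one chord per row);
# tau: the tuning exponent discussed later. Name and implementation are mine.
probe_echo <- function(probe, memory, tau = 1) {
  # cosine similarity between the probe and every trace (row) in memory
  sims <- apply(memory, 1, function(trace) {
    sum(probe * trace) / (sqrt(sum(probe^2)) * sqrt(sum(trace^2)))
  })
  activations <- sims^tau                 # tuning: raise similarities to a power
  echo <- colSums(memory * activations)   # similarity-weighted sum of traces
  echo / max(abs(echo))                   # normalize so the largest value is 1
}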
library(tidyverse)

# pre-processing to get the chord vectors
# load chord vectors
c_chord_excel <- rio::import("chord_vectors.xlsx")

# grab feature vectors
c_chord_matrix <- as.matrix(c_chord_excel[,4:15])

# assign row names to the third column containing chord names
row.names(c_chord_matrix) <- c_chord_excel[,3]

# define all keys
keys <- c("C","Db","D","Eb","E","F","Gb","G","Ab","A","Bb","B")

# the excel sheet only has chords in C
# loop through the keys, permute the matrix to get the chords in the next key
# add the permuted matrix to new rows in the overall chord_matrix
for (i in 1:length(keys)) {
  if (i == 1) {
    # initialize chord_matrix with C matrix
    chord_matrix <- c_chord_matrix
  } else {
    # permute the matrix as a function of iterator
    new_matrix <- cbind(c_chord_matrix[, (14-i):12],
                        c_chord_matrix[, 1:(13-i)])

    # rename the rows with the new key
    new_names <- gsub("C", keys[i], c_chord_excel[,3])
    row.names(new_matrix) <- new_names

    # append the new_matrix to chord_matrix
    chord_matrix <- rbind(chord_matrix, new_matrix)
  }
}
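A quick sanity check on the permutation (my own check, not part of the original chunk): the G major triad row should light up D, G, and B.

chord_matrix['G major triad', ]
#  C Db  D Eb  E  F Gb  G Ab  A Bb  B
#  0  0  1  0  0  0  0  1  0  0  0  1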
Each chord is represented as a vector with 12 features, corresponding to each of the 12 possible notes. If a note is in a chord, then the note feature gets a 1 in the vector. All other features are set to 0.
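For example, a C major triad (C, E, G) could be coded by hand like this (just an illustration of the coding scheme, not how the spreadsheet was built):

# 12 note slots starting on C; a note gets a 1 if it is in the chord
notes <- c("C","Db","D","Eb","E","F","Gb","G","Ab","A","Bb","B")
c_major_triad <- as.numeric(notes %in% c("C","E","G"))
c_major_triad
# 1 0 0 0 1 0 0 1 0 0 0 0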
The first 10 rows look like this:
Show the code
knitr::kable(c_chord_excel[1:10,])
| key | type | item | C | Db | D | Eb | E | F | Gb | G | Ab | A | Bb | B |
|-----|------|------|---|----|---|----|---|---|----|---|----|---|----|---|
| C | key | C note | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| C | scale | C major scale | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 0 | 1 |
| C | triads | C major triad | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
| C | triads | C minor triad | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
| C | triads | C6 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 |
| C | triads | Cm6 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 |
| C | triads | C (add 9) | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
| C | triads | Cm (add 9) | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |
| C | dominant 7th | C7 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 |
| C | dominant 7th | C9 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 0 |
The vector space includes one feature vector for all of the following chords and scales:
C note, C major scale, C major triad, C minor triad, C6, Cm6, C (add 9), Cm (add 9), C7, C9, C9(#11), C9 (13), C13 (#11), C∆7, C∆9, C∆9(#11), C∆9(13), C∆7(#5), C∆7(b5), Cm7, Cm9, Cm11, Cm7(11), Cm13, Cm13(#11), Cm7(b5), Cm9(b5), Cm11(b5), Csus, C7sus, C9sus, C13sus, C7susb9, C13susb9, C7(b5), C7(#5), C7(b9), C7#9b5, C7#9#5, C7b9b5, C7b9#5, Cm∆7, Cm∆9, Cdim, Co7, Cdim(∆7), C7#11#9, C chromatic, C whole-tone, C major pentatonic, C minor pentatonic, C Ionian, C Dorian, C Phrygian, C Lydian, C Mixolydian, C Aeolian, C Locrian, C maj 6th diminished, C melodic minor, C half-step/whole-step, C whole-step/half-step, C Blues
The above shows everything in the key of C. The memory matrix contains all of these chords and scales in all 12 keys, for a total of 756 patterns stored in memory (63 chord and scale types × 12 keys).
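A quick size check (assuming the 63 items listed above are exactly what's in the spreadsheet):

nrow(c_chord_matrix)  # 63 items in the key of C
dim(chord_matrix)     # 756 x 12 after permuting into all 12 keys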
In the next sections I’ll be giving this model an input pattern as a “reminder cue”, and then computing what the model “remembers” based on the cue. This is a way of asking about associations or expectations between one pattern and a history of other patterns. The answers the model gives back are entirely dependent on the nature of the memory traces.
The current set of chord vectors is very unlike my own musical experience. If I tried to capture my own musical experience as a series of individual traces, I would be inputting one feature vector for every chord, note, scale, or let's say short phrase, that I have ever played in my entire life. That collection of traces would be severely biased in terms of key, as I have way overplayed things in CFGDA throughout my life.
The chord vector space I'm using here is more like a uniform agent who has played every chord and scale equally often in all keys. So, the expectations returned by the model are relative to that kind of unbiased musical history.
MINERVA II modeling
Probe with a C note
The following code shows the basic steps in probing the memory with a cue pattern. I used a C note, which is coded as a single 1, followed by 11 zeros.
The cosine similarity between the probe and all patterns in memory is computed. There is a possibility of “tuning” the similarities by raising them to an exponent, but I’ll talk about that later.
The individual patterns in memory are multiplied by their similarity to the probe. This allows the cue to selectively retrieve memories that contain features in the cue. For example, traces that have 0 similarity to the cue will be multiplied by 0, and thus eliminated from the echo. The echo is produced by summing the similarity weighted traces.
The values in the echo are additive and can get very large. In the last step I divide all of the values in the echo by the maximum value to squish them between 0 and 1.
Show the code
# Try minerva
memory <- chord_matrix

# probe with a C
# Each of the 12 spots is a note, starting on C
probe <- c(1,0,0,0,0,0,0,0,0,0,0,0)

# compute similarities between probe and all traces
similarities <- RsemanticLibrarian::cosine_x_to_m(probe, memory)

# tuning function: raise similarities to a power
activations <- similarities^1

echo <- colSums((memory * c(activations)))
echo <- echo / max(abs(echo))
echo
C Db D Eb E F Gb G
1.0000000 0.2186057 0.4809454 0.4785718 0.4217287 0.5377400 0.3701894 0.5377400
Ab A Bb B
0.4217287 0.4785718 0.4809454 0.2186057
I like to think of the echo as the reminder values. Given the model hears a C, it is reminded of things that have a C in them. The echo is a similarity weighted sum of all of those things.
Full cacophony
A couple short detours before going in a more musical direction with this.
The echo in MINERVA-II embodies the idea that memory retrieval acts like a chorus of singers, where each singer is an individual memory trace.
Consider what would happen if memory was totally unselective and everything was retrieved all at once.
In the model, this would be like hearing all of the chords in the memory all at once. This can be represented by summing every trace together into one echo. This is the same as summing down the columns of the matrix like so:
Show the code
colSums(memory)
C Db D Eb E F Gb G Ab A Bb B
339 339 339 339 339 339 339 339 339 339 339 339
Each note appears 339 times across all of the chords in memory. If they were all played at once, the echo would sound like every note played simultaneously with a loudness of 339. If I normalize the echo by dividing by 339, I'd get all 1s, which would be saying play all the notes at 100% amplitude. In other words, if memory reminded you of everything, everywhere, all at once, it would sound like full cacophony.
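In code, that fully unselective echo and its normalized version would just be (restating the colSums() result above):

unselective_echo <- colSums(memory)
unselective_echo / max(abs(unselective_echo))  # all 1s: every note at full amplitude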
The sound of unbiased memory from C
MINERVA-II allows for selective retrieval of prior memories. The primary mechanism is that a probe pattern activates memories by similarity.
In this example, I apply the ceiling() function to the similarities, which transforms any positive value into a 1 and leaves the 0s at 0.
I’m using the C probe, so any chord pattern that has a C element in it will get a 1, and any chord pattern that does not have a C in it will get a 0.
I calculate both the echo and the normalized echo.
Show the code
# Try minerva
memory <- chord_matrix

# probe with a C
# Each of the 12 spots is a note, starting on C
probe <- c(1,0,0,0,0,0,0,0,0,0,0,0)

# compute similarities between probe and all traces
similarities <- RsemanticLibrarian::cosine_x_to_m(probe, memory)

# force similarities to 0 or 1
activations <- ceiling(similarities)

echo <- colSums((memory * c(activations)))
echo
C Db D Eb E F Gb G Ab A Bb B
339 82 173 170 149 191 132 191 149 170 173 82
Show the code
echo <- echo / max(abs(echo))
echo
C Db D Eb E F Gb G
1.0000000 0.2418879 0.5103245 0.5014749 0.4395280 0.5634218 0.3893805 0.5634218
Ab A Bb B
0.4395280 0.5014749 0.5103245 0.2418879
The first echo is basically a co-occurrence context vector, containing the co-occurrence frequency between C and each of the other notes counted across all of the patterns in memory. The second echo is the same information, just expressed as a proportion of the largest value.
C always co-occurs with itself. After that, C co-occurs most often with F and G, then with D and Bb, and so on.
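A quick check on the co-occurrence reading (assuming the note column names carried over from the spreadsheet): the number of traces containing both C and G should match the G value in the raw echo above.

# number of traces that contain both C and G
sum(memory[, "C"] * memory[, "G"])  # 191, the same value as G in the first echo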
This echo is not as cacophonous as hearing every single chord in memory played at the same time. However, I'm guessing it would still sound pretty cacophonous, as it is the sound of the 339 patterns that contain a C all played at once.
At some point, hopefully today, I’d like to synthesize tones using these echo values for note amplitude and hear what they sound like.
Increasingly selective echoes of C
MINERVA-II has a few options for controlling how many memories get added into the echo. After computing similarities between the probe and the memory traces, the similarities can be raised to a power before weighting the traces. As the exponent increases, smaller similarity values get squashed toward zero and effectively drop out, while larger similarity values remain proportionally larger. Perfectly similar traces stay at 1 regardless of the exponent.
The bottom line is that as the power is raised, fewer traces (only the most similar) are allowed to contribute to the echo. As a result, the echo becomes much less cacophonous.
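A small numerical illustration of the squashing: a similarity of 1 stays at 1, while smaller similarities shrink fast as the exponent grows.

sims <- c(1, 0.5, 0.25)
round(sims^1, 4)   # 1.0000 0.5000 0.2500
round(sims^3, 4)   # 1.0000 0.1250 0.0156
round(sims^11, 4)  # 1.0000 0.0005 0.0000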
The code below shows what happens when the probe is a C, and the exponent is raised to 1, 3, 11, and 51.
When the exponent is small, C is the loudest feature in the echo, but many other notes have some loudness too.
When the exponent is increased, the C remains the loudest, but the other notes get softer.
In the case of this vector space, driving up the exponent really, really high essentially causes only the identical patterns in memory to be retrieved. In the extreme, the C retrieves itself, and there are no other sounds of co-occurrence.
Show the code
# Try minerva
memory <- chord_matrix

# probe with a C
# Each of the 12 spots is a note, starting on C
probe <- c(1,0,0,0,0,0,0,0,0,0,0,0)

# compute similarities between probe and all traces
similarities <- RsemanticLibrarian::cosine_x_to_m(probe, memory)

# the exponent is set to 1 here; the other echoes below use larger exponents
echo <- colSums((memory * c(similarities^1)))
echo / max(abs(echo))
C Db D Eb E F Gb G
1.0000000 0.2186057 0.4809454 0.4785718 0.4217287 0.5377400 0.3701894 0.5377400
Ab A Bb B
0.4217287 0.4785718 0.4809454 0.2186057
C Db D Eb E F Gb
1.00000000 0.00556662 0.01662112 0.02554962 0.02060926 0.02637050 0.01847203
G Ab A Bb B
0.02637050 0.02060926 0.02554962 0.01662112 0.00556662
C Db D Eb E F Gb G Ab A Bb B
1 0 0 0 0 0 0 0 0 0 0 0
Messing around
I’m using this code block to try different probe patterns and see what happens. In general, the echo contains the elements of the probe, and then partial activation of other elements in approximate orders that seem to make musical sense.
Show the code
# Try minerva
memory <- chord_matrix

# probe using row names
probe <- chord_matrix['C major triad',]

# compute similarities between probe and all traces
similarities <- RsemanticLibrarian::cosine_x_to_m(probe, memory)

# tuning function: raise similarities to a power
activations <- similarities^3

echo <- colSums((memory * c(activations)))
echo <- echo / max(abs(echo))
sort(echo, decreasing = TRUE)
G C E D A Bb F B
1.0000000 0.9654616 0.9301669 0.6300119 0.6298521 0.4878587 0.4449117 0.4129313
Gb Eb Db Ab
0.3429289 0.3350617 0.3179172 0.3120386
Adding probes together
Let’s say one is playing a Dm7 chord in the left hand, and a G as part of a melody line. A probe could be constructed by adding together the vector for Dm7 and G. I’m also sorting the echo by feature intensity. I wonder if the order of notes in the echo could work for figuring out which scales to play over what chords and situations.
Show the code
# Try minerva
memory <- chord_matrix

# probe using row names
probe <- colSums(chord_matrix[c('Dm7','G note'),])

# compute similarities between probe and all traces
similarities <- RsemanticLibrarian::cosine_x_to_m(probe, memory)

# tuning function: raise similarities to a power
activations <- similarities^3

echo <- colSums((memory * c(activations)))
echo <- echo / max(abs(echo))
sort(echo, decreasing = TRUE)
C D G F A E Bb Eb
1.0000000 0.9959211 0.9949257 0.9327975 0.9327975 0.5428026 0.5428026 0.4500104
B Ab Gb Db
0.4413201 0.3122667 0.2901019 0.2713445
Discrepancy
The echo contains partial activations of non-probe features. These in some sense represent an expectation about what elements usually co-occur with the probe features in the stored memory traces.
It may be interesting to compute a discrepancy vector, which is a difference between the pattern in the probe and the echo.
These differences in expectation might be interesting to think about in terms of musical tension and resolution.
random notes:
subtraction introduces negative values and negative similarity
which way to subtract? probe-echo or echo-probe
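A sketch of both subtraction directions, for whatever probe and echo were last computed (clipping negatives to zero with pmax() is just one option I might try, not something settled):

echo_minus_probe <- echo - probe   # expectation beyond what was in the probe
probe_minus_echo <- probe - echo   # probe features beyond the expectation
pmax(0, probe_minus_echo)          # one way to drop the negative values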
Show the code
# Try minerva
memory <- chord_matrix

# probe using row names
probe <- chord_matrix['C note',]

# compute similarities between probe and all traces
similarities <- RsemanticLibrarian::cosine_x_to_m(probe, memory)

# tuning function: raise similarities to a power
activations <- similarities^1

echo <- colSums((memory * c(activations)))
echo <- echo / max(abs(echo))
echo
C Db D Eb E F Gb G
1.0000000 0.2186057 0.4809454 0.4785718 0.4217287 0.5377400 0.3701894 0.5377400
Ab A Bb B
0.4217287 0.4785718 0.4809454 0.2186057
Show the code
discrepancy <- echo - probe
discrepancy
C Db D Eb E F Gb G
0.0000000 0.2186057 0.4809454 0.4785718 0.4217287 0.5377400 0.3701894 0.5377400
Ab A Bb B
0.4217287 0.4785718 0.4809454 0.2186057
In this case the discrepancy vector has activation across all notes except C. Although the activation is not uniform, this discrepancy vector is similar to the chromatic scale, which is all of the notes.
Next, I submit the discrepancy vector as a probe to memory and list the 10 most similar traces, as a way to interpret the vector in terms of chord patterns.
Show the code
probe <- discrepancy

# compute similarities between probe and all traces
similarities <- RsemanticLibrarian::cosine_x_to_m(probe, memory)

df <- data.frame(chords = row.names(similarities),
                 similarities = similarities) %>%
  arrange(desc(similarities))

df[1:10,]
chords similarities
C chromatic C chromatic 0.9281836
Db chromatic Db chromatic 0.9281836
D chromatic D chromatic 0.9281836
Eb chromatic Eb chromatic 0.9281836
E chromatic E chromatic 0.9281836
F chromatic F chromatic 0.9281836
Gb chromatic Gb chromatic 0.9281836
G chromatic G chromatic 0.9281836
Ab chromatic Ab chromatic 0.9281836
A chromatic A chromatic 0.9281836
Another variation is adding the echo back to the probe. After hearing a note, the model retrieves the echo as a response. In this new moment, the original note and the retrieved chorus combine into a new probe.
Show the code
# Try minerva
memory <- chord_matrix

# probe using row names
probe <- chord_matrix['C note',]

# compute similarities between probe and all traces
similarities <- RsemanticLibrarian::cosine_x_to_m(probe, memory)

# tuning function: raise similarities to a power
activations <- similarities^1

echo <- colSums((memory * c(activations)))
echo <- echo / max(abs(echo))
echo
C Db D Eb E F Gb G
1.0000000 0.2186057 0.4809454 0.4785718 0.4217287 0.5377400 0.3701894 0.5377400
Ab A Bb B
0.4217287 0.4785718 0.4809454 0.2186057
Show the code
# add echo to probe
probe <- probe + echo

# compute similarities between probe and all traces
similarities <- RsemanticLibrarian::cosine_x_to_m(probe, memory)

activations <- similarities^1
echo <- colSums((memory * c(activations)))
echo <- echo / max(abs(echo))
echo
C Db D Eb E F Gb G
1.0000000 0.6316519 0.7881949 0.7720207 0.7313981 0.8173276 0.6973141 0.8173276
Ab A Bb B
0.7313981 0.7720207 0.7881949 0.6316519
This is like having steps of iterative retrieval. A variation is to get the retrieved echo and then submit the echo as the probe. What happens is that the echo fills up with more general co-occurrence information.
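A sketch of that iteration, using the probe_echo() helper from near the top of the post (my helper, not the post's code): each pass feeds the echo back in as the next probe, and the values drift toward general co-occurrence structure.

probe <- chord_matrix['C note', ]
for (step in 1:3) {
  probe <- probe_echo(probe, chord_matrix, tau = 1)
  print(round(probe, 3))
}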
Echo meaning
The echo is a feature vector in the same space as the chords. In general, the echo will contain more activation across all elements compared to any individual chord. This is because the echo sums over many chords, and typically sums over enough chords that all notes end up in the sum.
In this sense, the fact that an echo usually has partial activation across all notes makes the pattern in the echo similar to the chromatic scale, which has all of the notes. This is not a particularly interesting or nuanced meaning of the echo. If the echo were all 1s, then it would be the chromatic scale.
The activation values in the echo depend on the activation function that raises similarity to a power. A given echo can be interpreted in terms of the original chord vectors by calculating similarity between the echo and all of the chords, and then looking at the chords that are most similar. When the exponent is small, the most similar chords returned are all the chromatic scales (which are identical), and other chords with lots of notes in them.
As the exponent is raised higher, the pattern in the echo grows more similar to the probe pattern (with some extra activations), and the echo becomes similar to different patterns of chords, eventually homing in on the same ordering that the probe pattern itself would produce.
Show the code
# Try minerva
memory <- chord_matrix

# probe using row names
probe <- chord_matrix['C note',]

# compute similarities between probe and all traces
similarities <- RsemanticLibrarian::cosine_x_to_m(probe, memory)

# tuning function: raise similarities to a power
activations <- similarities^9

echo <- colSums((memory * c(activations)))
echo <- echo / max(abs(echo))
echo

# interpret the echo: find the traces in memory most similar to the echo
echo_similarities <- RsemanticLibrarian::cosine_x_to_m(echo, memory)

df <- data.frame(chords = row.names(echo_similarities),
                 similarities = echo_similarities) %>%
  arrange(desc(echo_similarities))

df[1:20,]
C Db D Eb E F Gb
1.00000000 0.02266201 0.06354290 0.08637707 0.07279614 0.09085135 0.06428662
G Ab A Bb B
0.09085135 0.07279614 0.08637707 0.06354290 0.02266201
chords similarities
C note C note 0.9734631
Csus Csus 0.6641514
C minor triad C minor triad 0.6616367
F major triad F major triad 0.6616367
Adim Adim 0.6591220
C major triad C major triad 0.6540038
F minor triad F minor triad 0.6540038
Ab major triad Ab major triad 0.6514892
A minor triad A minor triad 0.6514892
Gsus Gsus 0.6488032
Fsus Fsus 0.6488032
Cdim Cdim 0.6467066
Gbdim Gbdim 0.6467066
F (add 9) F (add 9) 0.6172144
Cm6 Cm6 0.6150366
F7 F7 0.6150366
Am7(b5) Am7(b5) 0.6150366
Fm (add 9) Fm (add 9) 0.6106042
C6 C6 0.6084264
F∆7 F∆7 0.6084264
In this variation the echo is submitted as the probe to generate a second echo. The first echo is already very general because even a single C note is in many chords. The second echo is extremely general because it has positive similarity to all chords. The values in the second echo can be thought of as reflecting very general expectations about note co-occurrence. The first echo also has some of these very general expectations.
What happens here is that some proportion of the second echo, which represents these super general features, is subtracted from the first echo. This lets the first echo reflect more nuanced and specific expectations given the probe.
It seems necessary to turn this into a shiny app or something, where the parameters can be wiggled around as a way to explore whether there are interesting things going on.
Show the code
# Try minerva
memory <- chord_matrix

# probe using row names
probe <- chord_matrix['Dm7',]

# compute similarities between probe and all traces
similarities <- RsemanticLibrarian::cosine_x_to_m(probe, memory)

# tuning function: raise similarities to a power
activations <- similarities^3

echo <- colSums((memory * c(activations)))
echo <- echo / max(abs(echo))
echo
C Db D Eb E F Gb G
0.9938232 0.2498014 1.0000000 0.4246298 0.5052113 0.9778738 0.3572633 0.7253382
Ab A Bb B
0.3626253 0.9668927 0.4883566 0.4300161
Show the code
# use the first echo as the new probe
probe <- echo

# compute similarities between probe and all traces
similarities <- RsemanticLibrarian::cosine_x_to_m(probe, memory)

activations <- similarities^3
second_echo <- colSums((memory * c(activations)))
second_echo <- second_echo / max(abs(second_echo))
second_echo
C Db D Eb E F Gb G
0.9957913 0.5074726 1.0000000 0.6452400 0.7262229 0.9401410 0.5719504 0.9369867
Ab A Bb B
0.5766082 0.9400520 0.7190890 0.6489396
Show the code
# subtract a weighted portion of the second echo
more_specific_echo <- echo - (.8 * second_echo)

echo_similarities <- RsemanticLibrarian::cosine_x_to_m(more_specific_echo, memory)

df <- data.frame(chords = row.names(echo_similarities),
                 similarities = echo_similarities) %>%
  arrange(desc(echo_similarities))

df[1:20,]
chords similarities
Dm7 Dm7 0.8371172
F6 F6 0.8371172
D minor triad D minor triad 0.7391103
F major triad F major triad 0.7358684
Dm7(11) Dm7(11) 0.7270672
D minor pentatonic D minor pentatonic 0.7270672
F major pentatonic F major pentatonic 0.7270672
G9sus G9sus 0.7270672
Dm9 Dm9 0.6810277
Bb∆9 Bb∆9 0.6710651
F (add 9) F (add 9) 0.6130494
Dm11 Dm11 0.6019057
G13sus G13sus 0.6019057
G7sus G7sus 0.5982104
C13sus C13sus 0.5928111
Gm11 Gm11 0.5928111
F9 (13) F9 (13) 0.5890195
D7sus D7sus 0.5873095
D Blues D Blues 0.5832278
Dm (add 9) Dm (add 9) 0.5643833
Lots of breadcrumbs here to follow up on later. Ultimately, I didn't get close to what I was hoping to accomplish. Making a note here that it would be interesting if this type of analysis could provide insight into chord movement from one chord to the next.
Some code for listening to echoes as complex tones built from sine waves, with the amplitude of each note's sine wave set by that note's intensity in the echo.
need to explore this
Show the code
library(tuneR)

# Function to generate a complex tone
generate_complex_tone <- function(duration, sampling_rate, frequencies, amplitudes) {
  time_points <- seq(0, duration, 1/sampling_rate)
  complex_tone <- sapply(seq_along(frequencies), function(i) {
    amplitudes[i] * sin(2 * pi * frequencies[i] * time_points)
  })
  return(rowSums(complex_tone))
}

# Set parameters
duration <- 5          # seconds
sampling_rate <- 44100 # Hz (standard audio sampling rate)
frequencies <- c(261.63, 277.18, 293.66, 311.13, 329.63, 349.23,
                 369.99, 392, 415.3, 440, 466.16, 493.88) # frequencies of sine waves in Hz

# Try minerva
memory <- chord_matrix

# probe with a C
# Each of the 12 spots is a note, starting on C
probe <- chord_matrix['C note',]

# compute similarities between probe and all traces
similarities <- RsemanticLibrarian::cosine_x_to_m(probe, memory)

# tuning function: raise similarities to a power
activations <- similarities^5

echo <- colSums((memory * c(activations)))
echo <- echo / max(abs(echo))

amplitudes <- echo # amplitudes of sine waves

# Generate complex tone
complex_tone <- generate_complex_tone(duration, sampling_rate, frequencies, amplitudes)
complex_tone <- complex_tone / max(abs(complex_tone))
complex_tone <- complex_tone * 32767

wave <- Wave(left = complex_tone, right = complex_tone,
             samp.rate = sampling_rate, bit = 16)

#writeWave(wave, "test.wav")
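To actually hear the result, the wave can be written out with writeWave() (as in the commented line above) or sent to a system audio player with tuneR's play(); the file name here is arbitrary.

writeWave(wave, "c_note_echo.wav")
# play(wave)  # needs an audio player that tuneR can call on this system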
experiment graveyard of fun ideas
extruding a subtracted echo from the middle out
Show the code
# Try minerva
memory <- chord_matrix

# probe using row names
probe <- chord_matrix['C7',]

for (i in 1:10) {
  # compute similarities between probe and all traces
  similarities <- RsemanticLibrarian::cosine_x_to_m(probe, memory)

  # tuning function: raise similarities to a power
  activations <- similarities^3

  echo <- colSums((memory * c(activations)))
  echo <- echo / max(abs(echo))
  #echo

  # subtract the probe from the echo and add a little noise
  subtracted_echo <- echo - probe + rnorm(12, 0, .05)
  subtracted_echo <- subtracted_echo / max(abs(subtracted_echo))
  subtracted_echo <- subtracted_echo^5

  echo_similarities <- RsemanticLibrarian::cosine_x_to_m(subtracted_echo, memory)

  df <- data.frame(chords = row.names(echo_similarities),
                   similarities = echo_similarities) %>%
    arrange(desc(echo_similarities))

  # sample(1,1) always returns 1, so this picks the single most similar trace
  next_chord <- df$chords[sample(1,1)]
  #probe <- chord_matrix[next_chord,]
  probe <- subtracted_echo
  print(next_chord)
}
Hintzman, Douglas L. 1984. "MINERVA 2: A Simulation Model of Human Memory." Behavior Research Methods, Instruments, & Computers 16 (2): 96–101. https://doi.org/10.3758/BF03202365.
Semon, R. 1923. Mnemic Psychology (B. Duffy, Trans.). Concord, MA: George Allen & Unwin. (Original work published 1909).