
Assignment due Thursday, December 4


1. Group project: Write as much as possible of the structured abstract report. Sections 1 and 2 should be easy to fill in now. Decide how the work will be divided between group members. Remember that each student should contribute equally and that all students are required to contribute at least one sub-section in the results section. To facilitate the collaborative writing, I suggest that you copy and paste the structured abstract report template onto your group project page. Remember to save a copy on your personal computer as you are working on it. The completed report will be due Thursday, December 11 (end of Reading period). To make sure this goal is achieved, all data collection should be completed and groups should plan to meet to format the data and divide up the remaining work by this Friday, December 5, at the latest.

2. Individual project: Write a short summary of today’s progress report and post it on your individual project page. In the spirit of collaborative learning, take some time to read your colleagues’ newest posts and post comments/suggestions/questions. You should also follow up on the individual feedback/suggestions you got today. I will post specific requirements for the final presentations later this week.


Musical Rhythms, Memory, and Human Expression


Musical Rhythms, Memory, and Human Expression

Ryan Davis, Angie Fuentes, and Kyle Yoder

Yale University, Cognition of Musical Rhythm, Virtual Lab

 

1. BACKGROUND AND AIMS

1.1  Introduction

The emotional properties of music, long recognized by music theorists, composers, and casual listeners alike, have yet to be fully explored by cognitive scientists. We do know that minuscule variations in timing between notes, called microtiming, are used by musicians to make their music sound more expressive; indeed, people listening to music that is played without microtiming often report that it sounds mechanical. Memory researchers have also demonstrated that emotional valence and social context strongly impact individuals’ ability to recall events. Our research seeks to explore the intersection of these two paths of research. [Kyle]

1.2  Previous Research

In 2008, Swedish researchers Juslin and Västfjäll conducted a large review of the research into the connections between music and emotion. Despite the widely accepted belief that the two are inextricably linked, these researchers found that the evidence was not sufficient to describe the mechanism by which music could elicit the same emotions in different persons. They proposed a multipart mechanism that they believed could account for these emotional responses. One aspect of this mechanism was musical expectancy and rhythm.

Research has revealed that one major component of listeners’ ability to ascribe emotional valence to music is subtle variations in timing between notes in that music. These variations, called microtiming, are employed by musicians (consciously and unconsciously) in order to give their performance an expressive quality (Ashley, 2002; Repp, 1999). Indeed, most “humanization” features in music software (the inverse of strict quantization), meant to make computer-generated music sound “more human,” operate by inserting microtiming variations into the piece in order to make it less perfect and, hopefully, more expressive.

Much research into memory has also focused on the effect of emotion. Research has found that not only are memories with some sort of emotional content more likely to be retained and more easily recalled in the future, but also that memories with a social context show this effect even more robustly (Coppola et al., 2014; Jhean-Larose et al., 2014; Watts et al., 2014). In fact, researchers have found that direct administration of oxytocin, a neuropeptide often associated with feelings of attachment and prosociality, can provide participants with enhanced memory for otherwise non-emotional information (Weigand et al., 2013). Furthermore, memories of neutral events are often overshadowed by those of closely occurring emotional events (Watts et al., 2014).

Some research has been done into the intersection of musical rhythm and memory. Balch and Lewis (1996) found that hearing a familiar rhythm could facilitate participants’ memories of events that were happening when they last heard the same rhythm. Drake et al. (2000) compared how well musicians and nonmusicians could synchronize with human-generated pieces containing microtiming and how well they could with computer-generated pieces played precisely as written. While they found that participants were better at synchronizing with the computer-generated pieces, they also found that participants synchronized with the human-generated (that is, expressive) pieces at slower metrical levels, at a narrower range of levels, and in closer correspondence to the theoretically correct metrical hierarchy. They concluded that microtiming might transmit a particular metrical interpretation to the listener and enable the perceptual organization of events over a longer time span (Drake et al., 2000).

The present study seeks to build on this research by exploring whether the microtiming variations and the expressive quality of the performance are sufficient to elicit these differences in cognitive processing, or whether participants’ beliefs about the social context of the music may mediate these effects. [Kyle]

1.3  Present Research

In this study, we examine whether the ease with which participants can recall a musical rhythm is affected by their beliefs about whether that rhythm was produced by a human or a computer. By testing participants in three separate belief groups – told the rhythms were created by a human, told they were created by a computer, or given no information about the rhythms’ origin – we hope to detect differences in the accuracy of rhythmic memory as a result of belief group. We predict that those who believe the rhythms were created by a human will perform better on the rhythmic memory task. [Angie]

2. METHOD

2.1  Participants

In total, 42 participants (25 female and 17 male) completed the study. They ranged in age from 19 to 59 years, with a mean age of 28.8 years (standard deviation = 11.8 years). All but three participants recorded English as their first language (the first language of 2 participants is Spanish and of 1 participant is French). Thirty-two of the participants had at least 1 year of musical training, with 13 of these having at least 10 years of training. Most of the participants play at least one instrument. Four participants reported having some sort of hearing deficiency, either ringing in their ears or mild to moderate hearing loss. [Angie]

2.2  Stimuli

Our stimuli were brief, three-bar rhythmic samples in 4/4 time. We divided our rhythms into two difficulty groups, which we named Simple and Complex. To accommodate our desired number of participants, each participant underwent eight trials, meaning that four Simple rhythms and four Complex rhythms were constructed. Each rhythm also had its own alternate version, subtly altered from the original, giving 16 different rhythms in total. Each rhythmic sample was assigned a randomized tempo (using an online random number generator) between 70 and 90 bpm, and each alternate version carried the same tempo as its original. This tempo range was chosen because it is commonly regarded as middle ground between slow and fast.
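As an illustration, the tempo-assignment step could be reproduced with the short Python sketch below; the rhythm labels and the use of Python’s random module are our assumptions (the authors used an online random number generator), so this is a sketch of the procedure rather than the authors’ actual tooling.

    import random

    # Hypothetical sketch: give each of the eight original rhythms a random
    # tempo between 70 and 90 bpm; each alternate version inherits the
    # tempo of its original, as described above.
    rhythm_ids = [f"simple_{i}" for i in range(1, 5)] + [f"complex_{i}" for i in range(1, 5)]

    tempos = {rid: random.randint(70, 90) for rid in rhythm_ids}       # originals
    tempos.update({rid + "_alt": tempos[rid] for rid in rhythm_ids})   # alternates match

    for rid, bpm in sorted(tempos.items()):
        print(f"{rid}: {bpm} bpm")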

The Simple rhythms were constructed using only dotted half notes, half notes, quarter notes, and eighth notes, with no syncopations. The Complex rhythms were constructed by adding sixteenth notes, dotted eighth notes, dotted quarter notes, and ties, thus creating syncopations. The rhythms were designed to be varied in content, and the location of each alternate version’s subtle change was spread evenly across the rhythmic samples to avoid predictability. The subtle changes were made by either changing a rhythmic value (e.g., a quarter note becoming two eighth notes) or flipping a rhythmic cell (e.g., a quarter note and two eighth notes becoming two eighth notes and a quarter note).
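The two alteration types can be made concrete with a small sketch. Here a rhythm is represented as a list of note durations in quarter-note beats, a representation we are assuming for illustration only; the example rhythm is hypothetical, not one of the actual stimuli.

    # A three-bar rhythm in 4/4 sums to 12 quarter-note beats (hypothetical example).
    rhythm = [2.0, 1.0, 1.0, 1.0, 0.5, 0.5, 2.0, 1.0, 1.0, 1.0, 1.0]

    def split_value(durations, i):
        """Change a rhythmic value, e.g. one quarter note -> two eighth notes."""
        d = durations[i]
        return durations[:i] + [d / 2, d / 2] + durations[i + 1:]

    def flip_cell(durations, i, j):
        """Flip a rhythmic cell, e.g. quarter + two eighths -> two eighths + quarter."""
        return durations[:i] + list(reversed(durations[i:j])) + durations[j:]

    altered = split_value(rhythm, 0)   # half note -> two quarter notes
    flipped = flip_cell(rhythm, 3, 6)  # quarter + two eighths -> two eighths + quarter
    assert sum(altered) == sum(flipped) == sum(rhythm) == 12.0  # total duration unchanged

Both operations preserve the total duration, which is why the alternate version can carry the exact tempo and length of its original.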

The rhythmic stimuli were recorded by Michael Laurello, a composition student at the Yale School of Music, using Apple Logic Pro 9.1.8 and a “roto tom” sample sound from the Vienna Symphonic Library. Michael recorded each rhythm at 0%, 50%, and 100% quantization, and we judged 50% to strike a true balance between rhythmic strictness and performance flexibility; 50% quantization was therefore used for every rhythm throughout the experiment. [Ryan]

2.3  Task & Procedure

Participants were randomly presented with eight of the rhythms (either simple or complex) via one playing of the recording and were asked to try to memorize what they heard. Each participant was told either 1) nothing, 2) that the recording was made by a human percussionist, or 3) that the recording was made by a computer. Following a distractor task (word puzzles), the participant was played either the identical rhythm heard before the distractor task or its alternate version, and was then asked whether what they heard the second time was the same as or different from the first rhythm. [Ryan]

2.4  Data Collection & Analysis

Data was collected through the Qualtrics survey website and exported into Microsoft Excel for analysis. The data was analyzed by looking for potential effects of each participant’s belief condition on their ability to correctly identify whether they were given the same or different rhythms within each trial. We also conducted limited analysis to discover any effects that demographic factors may have had on correct identification. [Kyle]
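A minimal sketch of this analysis in Python follows, with pandas standing in for the Excel work; the file name and column names ("belief", "trial_type", "correct", "years_training") are illustrative stand-ins for the actual Qualtrics export, not the study’s real field names.

    import pandas as pd

    # One row per trial: belief condition, whether the second rhythm was the
    # same or different, and whether the response was correct (0/1).
    df = pd.read_csv("rhythm_memory_export.csv")

    # Proportion correct within each belief condition and trial type
    print(df.groupby(["belief", "trial_type"])["correct"].mean())

    # Limited demographic breakdown, e.g. by years of musical training
    print(df.groupby("years_training")["correct"].mean())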

3. RESULTS

3.1 Population Sample

Forty-two participants (25 female and 17 male) were recruited via email and Facebook posts advertising the study. Participants were all between 19 and 59 years of age (mean age = 28.79, standard deviation = 11.95, median age = 23.00) and all had completed at least a high school level of education. Ten participants reported being unable to play a musical instrument, while the remaining thirty-two reported at least one year of experience playing: ten (23.80% of the total sample) reported playing primarily the piano, seventeen (40.47%) reported playing a string instrument (i.e., cello, violin, viola, or guitar), and four (9.52%) reported playing a woodwind or brass instrument. Only one participant reported playing percussion. The number of years of training varied widely among these participants (mean = 7.38, standard deviation = 6.35, median = 7.00). On a five-point scale (1 = no training, 5 = professional training), participants generally reported average familiarity with Western music training in instrumental performance, vocal performance, or music theory (mean = 2.38, standard deviation = 1.41), while five participants reported a professional level of overall training. Of the forty-two participants, four reported having some kind of mild hearing deficiency (two reported ringing, two reported mild hearing loss); however, all four reported being able to hear the stimuli used in this study clearly. [Kyle]

3.2  Analysis & Figure 1

Across all belief groups, participants performed better when the rhythm presented after the distraction was the same than when it was different. In other words, participants more often reported that the rhythm following the distraction was the same rather than a different rhythm. This is true for all belief groups, as shown in Figure 1. Combining all belief groups, 65.25% of participants answered correctly when the rhythm was the same (standard deviation = .0654), while 54.7% answered correctly when the rhythm was different (standard deviation = .0314). This may be evidence that people tend to assume rhythms are the same and are not particularly good at detecting minor differences between them. It may also be evidence that the word puzzle distraction was too time-consuming or difficult and required too much thought. [Angie]

Figure 1. [Bar graph of correct response rates for same and different rhythms, by belief group.]

3.3  Analysis & Figure 2

Figure 2 shows the Simple Rhythms and Complex Rhythms that were used in the experiment. The top rhythm of each grouping is the original form, while the bottom is its subtly altered version. Within a single trial, participants either heard the top rhythm of a grouping two times (with the playings separated by word puzzle distractions), in which case the correct answer was that the rhythms were identical, or heard the top rhythm first and the bottom rhythm second (again separated by word puzzle distractions), in which case the correct answer was that the rhythms were not identical.

Figure 2. [Notated Simple and Complex rhythms, with each original shown above its subtly altered version.]

From a visual standpoint, it is immediately clear that the Complex Rhythms are indeed more difficult than the Simple Rhythms, due to the increased number of audible attack points. The Simple Rhythms ranged from 12 to 15 audible attacks, with an average of 13.125. The Complex Rhythms ranged from 17 to 21 audible attacks, with an average of 18.875. The increased number of attack points naturally suggests that there is more information to remember, especially given that our participants heard each rhythm played only one time. However, in general, our participants did not have an exceedingly strong score in identifying whether the second rhythm played (be it simple or complex) was the same as or different from the first rhythm played. There are many possible reasons for this outcome, yet with our sample size it is impossible to determine any exact answers. The most obvious possible reason is that the rhythmic information was simply too long to retain after only one playing, a burden only reinforced by the intervening series of word puzzle distractions. In addition, the alternate versions of each rhythm were intentionally designed to be subtly different. The rhythmic differences were by no means large, and according to our analysis, even those who identified themselves as musical experts were not remarkably superior in their trials. [Ryan]

3.4  Analysis & Figure 3

As mentioned in section 3.1, five participants (4 male and 1 female) identified themselves as having a professional level of overall music training. These “expert” participants ranged in age from 22 to 36 (mean = 26.4, standard deviation = 5.68), each reported a different instrument as their primary (respectively: cello, clarinet, piano, viola, and violin), and all reported a minimum of ten years experience playing their instrument. We decided to examine whether these “experts” were significantly better at the task of identifying the rhythms than the general pool of participants.

Significance across conditions is impossible to show in this analysis, as three of the expert participants were randomly assigned to the computer-belief condition, while only one each was assigned to the human-belief and no-belief conditions. Taken as a whole, it appears that experts may be better than the general group of participants at correctly identifying the rhythms; however, due to the relatively small sample size of this group, these results are not significant (p > 0.05). This can be seen in Figure 3 below, which shows the average rate of correct responses on the rhythm identification task in the expert and general samples. [Kyle]
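The report does not state which test produced the p-value above; one simple option for comparing correct-response rates between a small expert group and the general pool would be Fisher’s exact test, sketched here with placeholder counts (not the study’s actual trial counts).

    from scipy.stats import fisher_exact

    expert  = [30, 10]    # [correct, incorrect] trials, expert group (placeholder)
    general = [180, 116]  # [correct, incorrect] trials, remaining participants (placeholder)

    odds_ratio, p_value = fisher_exact([expert, general])
    print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")  # p > 0.05 -> not significant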

Figure 3. [Bar graph of average correct response rates in the expert and general samples.]

4. CONCLUSIONS

Our results do not reveal any impact of belief group on participants’ ability to recall a rhythm. We predicted that participants would better recall a rhythm if they believed it was performed by a human. Although there were minor differences in rhythm recall accuracy between the three groups, no significant effect was demonstrated. Participants performed slightly better in the “no belief” group than in the other two groups, while the “computer-generated” belief group performed slightly worse than the other groups.

Similarly, no significant effect of music training on participants’ ability to correctly complete the rhythm recognition task was found. Nevertheless, the data trend in that direction, providing a basis for the hypothesis that, were more participants included in the study, this effect could be found to be significant. This distinction is important because it provides insight into whether the rhythms used in this study were too complex for the average person to remember after listening only once. Perhaps further research will reveal a “complexity threshold” for musical memory.

An unexpected finding from this study was that people tended to be better at determining that a rhythm was the same than at determining that a rhythm was different. However, further experimentation is necessary to determine whether this finding reflects an actual facet of human cognition. In this pilot study, it is possible that the changes in rhythms were simply too subtle for participants to detect. Another possibility is that participants defaulted to saying that rhythms were the same, producing a “false positive” for this effect.

Although the findings of this pilot study did not provide major evidence for answering our question about the interplay of emotion, belief, and memory, they did provide guidance for future experimentation exploring the same topic. One limitation of using Qualtrics to collect data was that, instead of being asked to replicate the rhythm, our participants were given a task using a “same-different” paradigm. In other words, participants had a 50% chance of guessing the correct answer, potentially allowing correct guesses to skew our results. If subjects were required to recreate the rhythm (perhaps by tapping it), one would be able to determine more accurately whether they had remembered the rhythm correctly.
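One standard way to separate genuine sensitivity from a bias toward answering “same” in such a paradigm, not used in the original analysis, is a signal-detection measure such as d′. A sketch using the aggregate rates from Section 3.2 under the simple yes/no approximation:

    from scipy.stats import norm

    hit_rate = 0.547               # "different" trials correctly called different (Section 3.2)
    false_alarm_rate = 1 - 0.6525  # "same" trials incorrectly called different

    d_prime = norm.ppf(hit_rate) - norm.ppf(false_alarm_rate)
    criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(false_alarm_rate))
    print(f"d' = {d_prime:.2f}, c = {criterion:.2f}")  # c > 0 indicates a bias toward "same"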

A similar study in the future would perhaps yield more revealing data if the selected rhythms were shorter in length. It would also be of interest to determine whether participants’ success in judging whether a rhythm was the same or different could be influenced by the actual percussive sound(s) used. For example, would it be easier to distinguish the rhythms if the chosen stimulus sound had a discernible pitch, or even multiple pitches? In addition, combinations of different time signatures could provide further insight.

Another limitation of this study was that whether the second rhythm presented in a trial was the same or different was not randomized; it was predetermined. We tried to minimize bias by randomizing the order in which participants saw the trials; however, we were unable to randomly assign the rhythm after the word puzzle to be the same or different. This further randomization would have eliminated any possible bias from certain rhythms being more distinctive and easier to find differences in.

The subject of belief and memory is an interesting topic that still requires much experimentation to be fully understood. With this study, we hoped to provide a foundation and springboard for future endeavors in this area. In moving forward in researching belief and memory, it is necessary to run more experiments testing their relationship and to devise new methods for examining how belief affects memory. Suggestions for future studies include replication, rather than recognition, of a rhythm, and varying the difficulty and length of the distraction between rhythm presentations. [Angie, Kyle, Ryan]

 

REFERENCES
[Kyle]

Ashley, R. (2002).  Do[n’t] Change a Hair for Me: The Art of Jazz Rubato. Music Perception, 19:3, 311–332.

Balch, W.R., & Lewis, B.S. (1996). Music-Dependent Memory: The Roles of Tempo Change and Mood Mediation. Journal of Experimental Psychology: Learning, Memory, and Cognition, 22:6, 1354-1363.

Coppola, G., Ponzetti, S., & Vaughn, B.E. (2014). Reminiscing Style During Conversations About Emotion-laden Events and Effects of Attachment Security Among Italian Mother–Child Dyads. Social Development, 23:4, 702–718. DOI: 10.1111/sode.12066.

Drake, C., Penel, A., & Bigand, E. (2000). Tapping in Time with Mechanically and Expressively Performed Music. Music Perception, 18:1, 1-23.

Jhean-Larose, S., Leveau, N., & Denhière, G. (2014). Influence of emotional valence and arousal on the spread of activation in memory. Cognitive Processing, 15, 515–522. DOI: 10.1007/s10339-014-0613-5.

Juslin, P.N., & Västfjäll, D. (2008). Emotional responses to music: The need to consider underlying mechanisms. Behavioral and Brain Sciences, 31, 559–621. DOI: 10.1017/S0140525X08005293.

Repp, B. (1999). Individual differences in the expressive shaping of a musical phrase: The opening of Chopin’s Etude in E major. In Suk Won Yi (Ed.), Music, Mind, and Science, 239-270.

Watts, S., Buratto, L.G., Brotherhood, E.V., Barnacle, G.E., & Schaefer, A. (2014). The neural fate of neutral information in emotion-enhanced memory. Psychophysiology, 51, 673–684. DOI: 10.1111/psyp.12211.

Weigand, A., Feeser, M., Gärtner, M., Brandt, E., Fan, Y., Fuge, P., Böker, H., Bajbouj, M., & Grimm, S. (2013). Effects of intranasal oxytocin prior to encoding and retrieval on recognition memory. Psychopharmacology, 227, 321–329. DOI: 10.1007/s00213-012-2962-z.

Individual Project Progress Report


The specific focus of my individual project is to determine whether the known effects of music therapy on dyslexia in children result merely from improved timing skills or from learning music as a topic area in general. To determine this, I plan to run an experiment in which dyslexic children will be divided into different treatment groups – a timing group, a music group, and an art group – and improvement after a year of treatment will be observed. In the timing group, my goal is to enhance the children’s timing skills without using music. To do this, I am thinking about using video games or sport skills, such as throwing and catching a ball. I will try to find sources that detail specific techniques or programs that have been proven to enhance temporal skills. In the music group, children will actively learn music by listening to music, learning rhythms, learning basic notation, and practicing other simple music skills. The art group will provide a control and will involve the children learning to use different mediums of art and making art projects. I also plan to contact a prominent researcher on the topic of language deficiencies and music, Katie Overy, to see if she knows of any other helpful sources or could provide any insights or advice. My next steps involve deciding on the details of the experiment and locating any further sources that may contribute to my topic.

Group 2 First Project Writeup


Agreement in Musical Experts on the Identification of Beat Levels and their Salience

Schroeder, J., Simmons, G.

Yale University, Cognition of Musical Rhythm, Virtual Lab

1. BACKGROUND AND AIMS

1.1  Introduction

This experiment aimed to look at the salience of beat (or pulse) levels, or subdivisions, in certain songs. Salience is a measure of how perceivable each beat level is, and is made up of a number of different variables, including volume, timbre, and pitch. The purpose of studying the number of salient pulse levels, or subdivisions, was to explore whether their variance might affect the perception of a song’s groove. A pulse level is a steady beat in the music, a stream of musical events that happen in equal and predictable intervals, and has also been defined more anecdotally as a beat that you might feel compelled to tap along or move to. However, in many pieces of music, there are several possible pulse levels that one could focus on. We theorized, after reading Janata, Tomic, and Haberman (2011), that having more pulse levels accessible in the music might be connected with a higher groove rating. There are, of course, many different factors that make up the perception of groove; in this study we wanted to isolate this one factor as best as possible to see what the relationship is.

1.2  Previous Research

Our initial inspiration was drawn from the “Sensorimotor Coupling in Music and the Psychology of the Groove” study by Janata, Tomic, and Haberman (2011). Many other studies have investigated the meaning of ‘groove’ and the rhythmic properties related to it, by comparing microtiming deviations (Gouyon, Hornstrom, Madison, Ullen, 2011) or just categorizing the prominent factors “regular-irregular, groove, having swing, and flowing” (Madison, 2006).

Methods included assessing correlations between listeners’ ratings and a number of quantitative descriptors of rhythmic properties for one hundred music examples from five distinct traditional music genres (Gouyon, Hornstrom, Madison, Ullen, 2011) and in terms of differences in ratings across sixty-four music examples taken from commercially available recordings (Madison, 2006).

Janata et al. explored the urge to move in response to music using phenomenological, behavioral, and computational techniques. Arguing that groove is a psychological construct, they showed that the “degree of experienced groove is inversely related to experienced difficulty of bimanual sensorimotor coupling under tapping regimes with varying levels of expressive constraint and that high-groove stimuli elicit spontaneous rhythmic movements” (Haberman, Janata, Tomic, 2011).

1.3  Present Research

Does the saliency of beat level pulses affect perceived groove rating?

Our initial proposal was to have a panel of musical experts rate beat levels in songs for confirmation and then to choose songs with a variety of beat levels to give subjects to rate for grooviness, but because of difficulties in collecting data and inconsistencies between experts’ opinions, we have decided to use only the first part of our initially proposed project.

2. METHOD

2.1  Participants

There were 5 participants: four students from the Yale School of Music and one professor. The participants were contacted by email and were not offered any sort of compensation.

2.2  Stimuli

The experiment consisted of a Qualtrics survey, built with the Qualtrics website, and contained fourteen 30-second excerpts of songs of various styles and genres, which were supplied by Petr Janata and had been used in Janata et al. (2011). Each of the fourteen excerpts constituted a trial, and the number of beat levels present in the song, the salience of each of those beat levels, and the primary instrument that contributed to the creation of each beat level were used as variables. Salience was rated on a scale from 0 to 10, and the labelling of instrumentation was left up to the subjects. The tempos were found using the toolbox described in Tomic & Janata (2008), and a few were halved because they were obviously associated with a faster metric level. One song (Step it Up Joe) was excluded due to a lack of information.

Song | Artist | Genre | Tempo (bpm) | Groove Rating
Superstition | Stevie Wonder | Soul | 99 | 108.7
Yeah! | Usher feat. Lil’ John & Ludacris | Soul | 211 (really 105.5) | 89.7
Freedom of the Road | Martin Sexton | Folk | 25 | 59.7
What a Wonderful World | Louis Armstrong | Jazz | 36 | 66.4
Beauty of the Sea | The Gabe Dixon Band | Rock | 63 | 32.1
Thugamar Fein an Samhradh Linn | Barry Phillips | Folk | 33 | 29.3
The Child is Gone | Fiona Apple | Rock | 195 (really 92.5) | 62.3
Mama Cita (Instrumental) | Funk Squad | Soul | 95 | 101.6
Citi Na GCumman | William Coulter & Friends | Folk | 20 | 35.2
Summertime | Ella Fitzgerald & Louis Armstrong | Jazz | 99 | 67.9
Goodies | Ciara feat. Petey Pablo | Soul | 50 | 92.3
Step it Up Joe | Mustard’s Retreat | Folk | n/a | n/a
In the Mood | Glenn Miller & His Orchestra | Jazz | 162 (really 81) | 96.9
Squeeze | Robert Randolph & The Family Band | Rock | 58 | 63.4

 

2.3  Task & Procedure

Participants were asked to complete a survey which presented 14 excerpts of songs in random order, each 30 seconds long. They were then asked to identify up to five beat levels, the first being the slowest and the last being the fastest, and to rate the salience of each. They were instructed to put down only those beat levels that they believed existed clearly in the music, not those that they were able to find due to musical training. They were also asked to provide the instrument that contributed the most to the creation of each beat level.

[Screenshot of one survey trial.]

This figure shows the basic setup of each trial. An additional space was provided below in each trial for miscellaneous or explanatory comments.

2.4  Data Collection & Analysis

The data was collected through the Qualtrics website and then exported into an Excel sheet. An analysis was conducted by looking at the experts’ agreement on the number of beat levels in each song, as well as the most salient of those beat levels. These measures were then compared to the groove ratings and tempos found in Janata et al. (2011).

Final Deadlines & Feedback – Updated!


Today was our last regular class meeting. I will be available for online and email feedback as well as for individual meetings throughout the Reading period and until our scheduled Final exam period (Wednesday, December 17 @ 9:00 AM). Please allow at least 48 hours for a response to email queries. I will review postings on the Virtual Lab periodically, but if you need feedback more urgently, you can always send me a note via email, with a link to the posting you want me to take a look at.

The updated individual project final presentation instructions and sample structured abstract reports from previous offerings of the course have been posted on the corresponding pages of the Virtual Lab.

Remember that your completed structured abstract for the group project is due on Thursday, December 11 (end of Reading period); the final structured abstract should be posted on your Virtual Lab’s group project page. In addition to the structured abstract, I would like you to submit an offline copy of your Qualtrics survey (e.g., PDFs or screenshots), copies of your original stimuli (Group 1 only; please label each stimulus condition with an informative file name), and a copy of your data files (with data analysis details, if applicable). This will facilitate my review process as well as provide samples for future offerings of this class. Please send the requested materials as a single .zip folder and include a PDF version of the structured abstract report.

Final Presentation Instructions


Instructions are found below and can be downloaded here.

Goal: Deliver a substantial and effective presentation of your research project to a (mock) panel of judges from a major funding agency on Wednesday, December 17, 9:00 AM (SKL 408). Individual presentations should be no more than 25 minutes, including 5 minutes for questions.

Required format: The presentation should be prepared using PowerPoint (or equivalent software). It is strongly recommended that you use the presenter’s tool to insert notes of what you plan to say. All slides must be emailed to me by December 17, 8:45 AM; make sure to include all the necessary materials (video/audio files). All materials should be sent in a single folder; it is also recommended to save the folder on a thumb drive as a backup.

Grading: This assignment will count for 50 points out of 100. The grade will be based on adherence to instructions (5), quality of delivery (20), quality of contents (20), and timely submission (5 points); see the attached evaluation sheet. Because this presentation counts as the final exam, each student is required to attend the entire duration of the presentations; no make-up will be given unless supported by a Dean’s permission.

Although presentations will vary in style, they should include all the components listed below; each component may be represented by one or a few slides. Note that the amount of time spent on each component will vary depending on the specific nature of the project and the state of research on this topic.

  1. Larger context
  • Situate your project within some everyday life element.
  2. Research question
  • State the question as clearly and succinctly as possible; include all necessary definitions.
  3. So what?
  • Why should the audience care about this particular question? Are there larger implications?
  4. Background research/Previous findings
  • What is the current state of knowledge/research on this topic? What are the specific findings directly relevant to your project?
  • Divide previous work and findings into 2-3 categories based on different aspects of the question or research methods used.
  5. Hypothesis statement/Specific questions
  • Re-state your question in terms of variables, measures, and possible outcomes (i.e., if behavior A is observed, it will suggest a, and if behavior B is observed, it will suggest b).
  • If your research does not involve a behavioral experiment but some other form of empirical method, you can still form a hypothesis, but it might be stated in a (somewhat) different format. Alternatively, you may present a set of specific questions.
  6. Experimental design/Research method
  • Describe the proposed experiment as clearly and concretely as possible, including source materials (you may include a sample) & methods (task, procedure, participants, variables, measures, data analysis).
  • Include some kind of figure that clarifies the experimental design in some way; this can help clarify the procedure and might save a lot of time.
  • Identify at least one musical example you plan to use either for analysis or as source materials for your proposed experimental design, and make it part of your presentation (i.e., play a recording, if available, or present it as an example).
  • NOTE: If your research does not involve a behavioral experiment, this section should be adapted to your methodology. For example, if your work involves corpus analysis (i.e., the systematic analysis of a given characteristic in a representative sample from a particular body of works), you should provide a description of the method and a sample analysis. If the research on your topic is still in its infancy, say how you plan to advance the research, and be as specific as possible.
  7. Concluding remarks/Discussion
  • Identify possible applications of your findings and/or important questions/issues on which your research is likely to shed some light. NOTE: This section is a way to re-visit the “So what?” question you initially addressed, but in light of your proposal.

  8. References

  • Your final slide should include a list of all references; make sure to use APA style throughout; you may use the references section of the structured abstract template as an example.

Sample Structured Abstract Reports


Here are a few sample structured abstracts from students’ group projects in previous semesters:

– Acevedo, Lettie, Parnes, & Schartmann (2013), Effect of Tempo on Perceived Emotion of Musical Excerpts

– Broshy, Latterner, & Sherwin (2013), Interaction Between Melodic Pitch Content and Rhythmic Perception

– Davis, Fox, & Roth (2013), Effects of Rhythmic Consistency on Perceived Speech Effectiveness

– De Freitas, Jameson, & Strebendt (2013), Influence of Rhythmic Tempo on Sustained Entrainment to the Beat

– Guerra, Hosch, & Selinsky (2013), Tapping to Uneven Beats

 

 

 

Agreement in Musical Experts on the Identification of Beat Levels and their Salience


Agreement in Musical Experts on the Identification of Beat Levels and their Salience

Schroeder, J., Simmons, G.

Yale University, Cognition of Musical Rhythm, Virtual Lab

1. BACKGROUND AND AIMS

1.1  Introduction

This experiment aimed to look at the salience of beat (or pulse) levels, or subdivisions, in certain songs. Salience is a measure of how perceivable each beat level is, and is made up of a number of different variables, including volume, timbre, and pitch. The purpose of studying the number of salient pulse levels, or subdivisions, was to explore whether their variance might affect the perception of a song’s groove. A pulse level is a steady beat in the music, a stream of musical events that happen in equal and predictable intervals, and has also been defined more anecdotally as a beat that you might feel compelled to tap along or move to. However, in many pieces of music, there are several possible pulse levels that one could focus on. We theorized, after reading Janata, Tomic, and Haberman (2011), that having more pulse levels accessible in the music might be connected with a higher groove rating. There are, of course, many different factors that make up the perception of groove; in this study we wanted to isolate this one factor as best as possible to see what the relationship is. [Genevieve]

1.2  Previous Research

Our initial inspiration was drawn from the “Sensorimotor Coupling in Music and the Psychology of the Groove” study by Janata, Tomic, and Haberman (2011). Many other studies have investigated the meaning of ‘groove’ and the rhythmic properties related to it, by comparing microtiming deviations (Gouyon, Hornstrom, Madison, Ullen, 2011) or just categorizing the prominent factors “regular-irregular, groove, having swing, and flowing” (Madison, 2006).

Methods included assessing correlations between listeners’ ratings and a number of quantitative descriptors of rhythmic properties for one hundred music examples from five distinct traditional music genres (Gouyon, Hornstrom, Madison, Ullen, 2011) and in terms of differences in ratings across sixty-four music examples taken from commercially available recordings (Madison, 2006).

Janata et al. explored the urge to move in response to music using phenomenological, behavioral, and computational techniques. Assuming that groove is a psychological construct, they posited that the “degree of experienced groove is inversely related to experienced difficulty of bimanual sensorimotor coupling under tapping regimes with varying levels of expressive constraint and that high-groove stimuli elicit spontaneous rhythmic movements” (Haberman, Janata, Tomic, 2011). [Genevieve]

1.3  Present Research

The question we set out to answer was whether the saliency of beat level pulses affected the perceived groove rating of a set of songs with already-established groove ratings from the 2011 study of Haberman, Janata, and Tomic.

Our initial proposal was to have a panel of musical experts rate beat levels in songs for confirmation and then to choose songs with a variety of beat levels to give subsequent subjects to rate for grooviness, but because of difficulties in collecting data and inconsistencies between experts’ opinions, we decided to use only the first part of our initially proposed project. [Genevieve]

2. METHOD

2.1  Participants

There were 5 participants: four students from the Yale School of Music and one professor. The participants were contacted by email and were not offered any sort of compensation. [Jordan]

2.2  Stimuli

The experiment consisted of a Qualtrics survey, built with the Qualtrics website, and contained fourteen 30-second excerpts of songs of various styles and genres, which were supplied by Petr Janata and had been used in Janata et al. (2011). Each of the fourteen excerpts constituted a trial, and the number of beat levels present in the song, the salience of each of those beat levels, and the primary instrument that contributed to the creation of each beat level were used as variables. Salience was rated on a scale from 0 to 10, and the labelling of instrumentation was left up to the subjects. The tempos shown below were found by Stefan Tomic and Petr Janata using the method described in Tomic & Janata (2008), and a few were halved because they were obviously associated with a faster metric level. One song (Step it Up Joe) was excluded from later analyses due to a lack of information provided to us by Janata et al.

Song | Artist | Genre | Tempo (bpm) | Groove Rating
Superstition | Stevie Wonder | Soul | 99 | 108.7
Yeah! | Usher feat. Lil’ John & Ludacris | Soul | 211 | 89.7
Freedom of the Road | Martin Sexton | Folk | 25 | 59.7
What a Wonderful World | Louis Armstrong | Jazz | 36 | 66.4
Beauty of the Sea | The Gabe Dixon Band | Rock | 63 | 32.1
Thugamar Fein an Samhradh Linn | Barry Phillips | Folk | 33 | 29.3
The Child is Gone | Fiona Apple | Rock | 195 | 62.3
Mama Cita (Instrumental) | Funk Squad | Soul | 95 | 101.6
Citi Na GCumman | William Coulter & Friends | Folk | 20 | 35.2
Summertime | Ella Fitzgerald & Louis Armstrong | Jazz | 99 | 67.9
Goodies | Ciara feat. Petey Pablo | Soul | 50 | 92.3
Step it Up Joe | Mustard’s Retreat | Folk | X | X
In the Mood | Glenn Miller & His Orchestra | Jazz | 162 | 96.9
Squeeze | Robert Randolph & The Family Band | Rock | 58 | 63.4

Figure 1: This figure details the information about each song as collected and found by Janata et al. (2011). [Jordan]

2.3  Task & Procedure

Participants were asked to complete a survey which presented 14 excerpts of songs, each 30 seconds long, in a random order. Participants were then asked to identify up to five beat levels, the first being the slowest and the last being the fastest, and to rate the salience of each. They were instructed to record only beat levels that they believed existed clearly in the music, not subdivided and less natural beat levels that they were able to find due to musical training. They were also asked to provide the instrument that contributed the most to the creation of each beat level.

Figure 2: This figure shows the basic setup of each trial. An additional space was provided below in each trial for miscellaneous or explanatory comments. [Jordan]

2.4  Data Collection & Analysis

The data was collected through the Qualtrics website and then exported into an Excel sheet. An analysis was conducted by looking at descriptive statistics on the experts’ agreement on the number of beat levels in each song, as well as the most salient of those beat levels. Tempos of the most salient beats were found using an online tap metronome. This was done by the authors, and though a certain amount of subjectivity was involved, the experts’ information concerning the instrumentation of each level, as well as its placement on the scale from Slowest Beat Level to Fastest Beat Level, was carefully consulted in making decisions about the tempo. These measures were then compared to the groove ratings and tempos found in Janata et al. (2011). [Jordan]
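A minimal sketch of the agreement computation, with pandas substituting for the Excel step; the file name and layout (one row per song, one column per expert) are illustrative assumptions, not the actual export.

    import pandas as pd

    df = pd.read_csv("beat_levels.csv", index_col="song")  # columns: Expert #1 ... Expert #5

    summary = pd.DataFrame({
        "range": df.min(axis=1).astype(str) + " to " + df.max(axis=1).astype(str),
        "mean": df.mean(axis=1),
        "std": df.std(axis=1),  # sample standard deviation, as reported in Figure 4
    })
    print(summary)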

3. RESULTS

3.1  Population Sample

Our population sample consisted of four (4) Yale School of Music students and one (1) professor and researcher of music. By these criteria, we judged them to be musical “experts”, defined as having many years of experience and a solid basis of theoretical and practical application of music. We believe our results allow us to generalize to the population of people who have this foundation of musical knowledge, but also to comment on the general population as a whole. No further descriptive statistics such as age, gender, etc. were obtained, as the study was only intended to be exploratory at the outset. [Jordan]

3.2  The Number of Beat Levels

The # of Beat Levels

Song | Expert #1 | Expert #2 | Expert #3 | Expert #4 | Expert #5
Superstition | 3 | 2 | 4 | 3 | 3
Yeah! | 2 | 4 | 3 | 3 | 5
Freedom of the Road | 2 | 4 | 4 | 3 | 4
What a Wonderful World | 2 | 3 | 5 | 3 | 4
Beauty of the Sea | 3 | 4 | 2 | 2 | 4
Thugamar Fein an Samhradh Linn | 2 | 3 | 2 | 2 | 4
The Child is Gone | 2 | 3 | 4 | 3 | 5
Mama Cita (Instrumental) | 2 | 3 | 2 | 3 | 4
Citi Na GCumman | 2 | 2 | 3 | 2 | 5
Summertime | 2 | 2 | 4 | 2 | 5
Goodies | 2 | 3 | 3 | 3 | 2
Step it Up Joe | 2 | 3 | 3 | 2 | 4
In the Mood | 3 | 3 | 3 | 3 | 3
Squeeze | 3 | 3 | 4 | 3 | 4

Figure 3: Table of the numbers of beat levels assigned to each song by each expert.

As the table shows, a wide variety of beat levels was identified by the individual experts. Different experts had different tendencies in the number of beat levels they consistently identified; for example, Expert #1 only alternated between identifying 2 and 3 beat levels per song. Expert #4 also identified only 2-3 beat levels per song, and Expert #2 had only three songs with 4 beat levels identified. Experts #3 and #5 both showed more variety in the number of beat levels identified. Only Experts #3 and #5 identified any song as having 5 beat levels, although this was not consistent across songs, and Expert #3 identified only “What a Wonderful World” as having 5 beat levels. The only song with a consistent number of beat levels across all five experts was “In the Mood”, but the experts still disagreed on the particular order of instrumentation, as organized by tempo, in distinguishing each beat level.

Descriptive Statistics of the # of Beat Levels by Song

Song | Range | Mean | Standard Deviation
Superstition | 2 to 4 | 3 | 0.707
Yeah! | 2 to 5 | 3.6 | 1.140
Freedom of the Road | 2 to 4 | 3.4 | 0.894
What a Wonderful World | 2 to 5 | 3.4 | 1.140
Beauty of the Sea | 2 to 4 | 3 | 1.000
Thugamar Fein an Samhradh Linn | 2 to 4 | 2.6 | 0.894
The Child is Gone | 2 to 5 | 3.4 | 1.140
Mama Cita (Instrumental) | 2 to 4 | 2.8 | 0.837
Citi Na GCumman | 2 to 5 | 2.8 | 1.304
Summertime | 2 to 5 | 3 | 1.414
Goodies | 2 to 3 | 2.6 | 0.548
Step it Up Joe | 2 to 4 | 2.8 | 0.837
In the Mood | 3 | 3 | 0
Squeeze | 3 to 4 | 3.4 | 0.548

Figure 4: Descriptive statistics based on the number of beat levels the experts assigned each song, organized by song.

The majority of the ranges are centered around 3, with the song means falling between 2.6 and 3.6. The mean of all the beat levels perceived by the experts, across all songs, was M = 3.04 (SD = 0.33). No song was identified as having fewer than 2 beat levels.
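These summary figures can be recomputed directly from the Figure 3 table, for example with the following sketch (the array simply transcribes that table):

    import numpy as np

    # Rows = songs in Figure 3 order, columns = Experts #1-#5.
    levels = np.array([
        [3, 2, 4, 3, 3], [2, 4, 3, 3, 5], [2, 4, 4, 3, 4], [2, 3, 5, 3, 4],
        [3, 4, 2, 2, 4], [2, 3, 2, 2, 4], [2, 3, 4, 3, 5], [2, 3, 2, 3, 4],
        [2, 2, 3, 2, 5], [2, 2, 4, 2, 5], [2, 3, 3, 3, 2], [2, 3, 3, 2, 4],
        [3, 3, 3, 3, 3], [3, 3, 4, 3, 4],
    ])

    print(levels.mean())                    # grand mean over all 70 responses, ~3.04
    print(levels.mean(axis=1))              # per-song means (cf. Figure 4)
    print(levels.mean(axis=1).std(ddof=1))  # spread of the song means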

Figure 5: A histogram of the frequencies of each mean # of beat levels of the songs

Figure 6: This bar graph displays the means of the number of beat levels given by the experts for each song.

The histogram in Figure 5 appears to be relatively normal, except for a gap in the center at a mean of 3.2 beat levels, which tells us that statistical analyses are valid, but also that our results may be more random than we expected, as a normal distribution is what a random sampling would produce. Means of 3 and 3.4 beat levels are the most frequent, occurring four times each – 3.4 falls just outside one standard deviation from the overall mean, so it is odd that there are 4 songs with that mean, but we had only 5 participants, which may explain some of the randomness in the data. “Yeah!” falls furthest from the overall mean, but is still less than 2 standard deviations away.

Descriptive Statistics of the Number of Beat Levels by Expert

Expert | Range | Mean Number of Beat Levels Perceived | Standard Deviation
Expert #1 | 2 to 3 | 2.286 | 0.469
Expert #2 | 2 to 4 | 3 | 0.679
Expert #3 | 2 to 5 | 3.286 | 0.914
Expert #4 | 2 to 3 | 2.643 | 0.497
Expert #5 | 2 to 5 | 4 | 0.877

Figure 7: This table shows descriptive statistics of the number of perceived beat levels, this time organized by expert rather than by song.

Figure 8: This figure shows how many beat levels each expert perceived in each song.

As we can see from these two figures, it is clear that different experts had different concepts of beat levels and used different strategies to find the beat levels in a given song. After all, for the same 14 clips of songs, one expert (Expert #5) perceived mostly 4 and 5 beat levels in the songs, while two others (Experts #1 & #4) perceived no more than 3 beat levels in all of the clips. We can also see that there was wide disagreement between the experts on almost every song – the only song that the experts unanimously agreed on was “In the Mood”, which they all perceived as having 3 beat levels. It must also be taken into account that the mean and standard deviation across all the reported numbers of beat levels were M = 3.04, SD = 0.33; Expert #2 is closest to this mean. For all the results from each expert in table form, please check the appendix.

3.3  Experts’ Perceived Instrumentation and Salience Ratings

Tempos of Each Expert’s Most Salient Beat Levels

Song | Expert #1 | Expert #2 | Expert #3 | Expert #4 | Expert #5
Superstition | Bass Drum + Snare: 50; Clavinett + High Hat: 99 | Bass: 100 | Vocals: 100; Guitars: 200 | Voice: 100 | Bass, Kick, + Snare: 100
Yeah! | High Hat + Synth: 210 | Percussive Click: 52 | Vocals: 210 | Bass Drum + Clap: 105 | Kick: 52; Cymbals, Voice + Synth: 210
Freedom of the Road | Bass Drum + Snare: 49 | Bass: 25; Drums: 146 | Harmony Piano: 25; Drums: 146; Vocals: 146 | Bass Drum + Snare Drum: 49 | Kick + Snare: 49
What a Wonderful World | Bass Drum + Snare: 72 | Drums: 72 | Vocals: 72 | Voice: 72 | Kick, Snare, Horns, + Strings: 72
Beauty of the Sea | Keyboard: 60; Keyboard: 120 | Synth: 120 | Saxophones: 60 | Strings (Synthesizer): 60 | Organ: 60
Thugamar Fein an Samhradh Linn | Downbeats Every 6: 17 | Cello: 34; Bagpipes: 34 | Wind Cello: 34 | Rolled Chords: 17 | Cello: 34
The Child is Gone | Piano: 65; Drums: 195 | Piano: 65 | Vocals: 65; Drums + Keyboard: 195 | Piano Chords: 65 | Drums, Bass, + Piano: 195
Mama Cita (Instrumental) | Bass Drum + Bass: 95; High Hat + Percussion: 190 | Drum: 95 | Keyboard: 95; Percussion: 190 | Bass Drums: 95 | Kick + Bass: 95; Cabasa: 190
Citi Na GCumman | Guitar’s Bass Notes: 38 | Guitar Whole Note: 38 | Guitar Melody: 114 | Guitar Arpeggios: 114 | Strum: 38
Summertime | Strings: 34; Trumpet: 70 | Drums: 70 | Trumpet: 70 | Drum Pattern: 70 | Strings: 34; Bass: 70
Goodies | Claps: 51 | Drums: 101 | Vocals: 101 | Snare Drum: 101 | Vocals: 204
In the Mood | Bass: 162 | High Hat: 162 | Saxes + Trumpets: 162 | Double Bass: 162 | Drum + Bass: 162
Squeeze | Bass Guitar + Bass Drum + Snare: 120; High Hat: 244 | Drum Kit: 244 | Bass: 120; Drums: 244; Guitar: 475 | Bass Guitar: 120 | Bass: 120; Rhythm + Solo: 240

Figure 9: A table showing the instrumentation and tempo of the beat level (or beat levels, as is the case with a few songs and experts) experts perceived as the most salient. Tempos are in BPM, and when an expert rated a song as having two or more equally salient “most salient” beat levels, all have been included.

Figure 10: This graph depicts the tempo found by the authors of the “Most Salient Beat Level” identified by each expert, compared to the tempos found by Janata et al. (2011). In order to graph the information, only one “Most Salient Beat Level” could be shown, though as the table above shows, some experts identified more than one “Most Salient Beat Level” – in these cases we have arbitrarily chosen to graph the slowest one.

As we can see from Figures 9 and 10, many of the experts were able to agree on a most salient beat level – all of the experts and Janata agreed unanimously on the tempo of the most salient beat level of “In the Mood”, placing it at around 162 BPM, as well as on the most salient beat level of “Mama Cita (Instrumental)”. However, as the table shows, in the case of “Mama Cita (Instrumental)”, three of the experts (Experts #1, #3, and #5) also identified a second most salient beat level at double the first tempo. This is an example of how different beat levels tended to follow simple ratios such as halves and doubles. The keyboard, drum, and bass in “Mama Cita (Instrumental)” were identified by all five experts as the most salient beat level, at the tempo we determined to be 95 BPM; additionally, those three experts all identified a second, equally salient beat level at 190 BPM (double time of the slower, more salient beat level) for high hat, percussion, and cabasa. This can also be seen in “Superstition,” where the most salient beat levels identified fell at the four different tempos of 50, 99, 100, and 200 BPM, all easily subdivided into one another.

There was some disagreement, though; all of the experts deemed either percussive sounds or vocals the most salient beat level for “Goodies,” but each identified it at a different tempo, based on our interpretations of their rankings of each salient instrument. Expert #1’s “claps” are half the tempo of Expert #2’s and Expert #4’s percussion sounds, which are in turn the same tempo as Expert #3’s vocals, since those vocals are listed before percussion in the five levels. Expert #5 listed vocals after the percussion beat levels, however, leading us to interpret theirs as the faster subdivision of the vocals, at a tempo of 204 BPM.

Experts tended to differ particularly in identifying the instrumentation of specific beat levels, or assigned different instruments different tempos, but for the most part each always had at least one level that could also be found in the results of the other experts. For example, in “What a Wonderful World,” the most salient beat level, although identified as some combination of bass drum, strings, horn, and voice, was always at a consistent tempo of 72 BPM. Many differences in instrumentation were simply due to strategies for naming instruments, i.e., labelling a beat level as “bass drum and snare” versus “kick and snare.”

In conclusion, we found a wide variance in our data – sometimes the experts agreed, even unanimously in the case of “In the Mood”, and sometimes they disagreed not only on the most salient beat levels, but also on the instrumentation and tempos of those beat levels. However, we generally found that the experts identified tempos that fell into simple beat ratios, and that, if not the same as Janata’s ratings, they were at least multiples or factors of them.
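The “simple ratio” observation can be made precise with a small check of whether each reported tempo is close to an integer multiple or factor of the Janata et al. (2011) reference tempo; the tolerance value below is our assumption, chosen only for illustration.

    def simple_ratio(tempo, reference, tolerance=0.06):
        """Return the simple multiple/factor that tempo/reference approximates, if any."""
        ratio = tempo / reference
        for mult in (0.25, 1 / 3, 0.5, 1, 2, 3, 4):
            if abs(ratio - mult) / mult < tolerance:
                return mult
        return None

    # "Superstition" most-salient tempos from Figure 9, against the 99 bpm reference:
    for tempo in (50, 99, 100, 200):
        print(tempo, "->", simple_ratio(tempo, 99))  # prints 0.5, 1, 1, 2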

3.4  Analysis & Figure 11

Figure 11: This graph compares our mean number of beat levels identified to the groove ratings of the same song clips from Janata et al. (2011)

As we can see from this graph, though no formal statistical analysis was possible (due to the lack of a “known value” for the beat levels), our data give us no reason to reject the null hypothesis that the number of beat levels in a song is not correlated with the grooviness of the song. In other words, it is not at all clear whether having more beat levels in a song might contribute to hearing the song as having more groove. In our findings, the song with the highest groove rating had roughly an average number of beat levels, while songs that received a very low groove rating had the same average number or an even higher number of beat levels. “Citi Na GCumman,” with one of the lowest groove ratings at 35.2, had up to five beat levels identified.

4. CONCLUSIONS

It’s clear from our results that beat levels are a far more subjective and varied measure than we anticipated. Our assumption was that school of music students and professors, with their expertise and experience with music, would be likely to identify similar numbers of beat levels for each song. However, as shown by the results, only one song, “In the Mood”, was unanimously agreed upon – the rest showed a wide range of responses, with some experts identifying only 2 beat levels for some songs and others identifying up to 5 beat levels for the same songs. As noted in the Present Research section of this structured abstract, this unexpected variability made it difficult, and of questionable worth, to continue with our original experiment. However, with the data we gathered from the experts, we were still able to conduct several analyses exploring more thoroughly the variability we found between experts, and comparing our results with those of Janata et al. (2011).

Through these analyses we discovered that each expert often had their own quirks and trends – this makes sense when you consider that each expert likely had their own specific strategy for discerning the number of beat levels in each clip. Across the experts, we also noticed a habit of recognizing a combination of instruments as creating beat levels not articulated by one instrument alone. For example, in “Yeah!”, Expert #4 identified the instrumentation of the most salient beat level as “Bass Drum + Clap”. What we can hear when listening to the clip is that the Bass Drum and the Clap each move at slower tempos individually (Expert #2 separated them, as shown in Figure 9; we attributed Expert #2’s “percussive click” to the Clap heard by Expert #4, and found it to have a tempo roughly half that of the two instruments combined), but when heard together and perceived as one beat level, they combine to form a faster metric structure, the downbeats of each instrument falling on beats of a faster tempo.

Experts, in addition to disagreeing about the number of beat levels present in the songs, also disagreed about the most salient beat and its instrumentation. As noted above, this was sometimes caused by some experts’ combining of instruments, but sometimes different experts simply seemed to be listening for different auditory stimuli in order to obtain the number of beat levels. Comparing the experts’ responses in Figure 9, we can see that Expert #1 seemed to focus more on the percussion present in a song to find salient beat levels, while Expert #2 seemed to look more consistently to the strings. From these results we can infer that these two experts were likely using different strategies, but also may have different concentrations or focuses within music, one being more attuned to percussion and the other to string instruments.

In addition to the variability between experts, we ran into the problem that all of the songs’ averaged beat-level counts fell between 2.6 and 3.6. Despite the variability of the individual reports, which sometimes ranged from 2 beat levels all the way to 5 for the same song, the averages showed little variation. The difference may still be meaningful, but it undermines our implicit assumption that the averages would vary more widely and sort the songs into clearly distinct categories. Because we sampled a variety of genres and tempos, this also suggests that most songs would fall within this range.
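The averaging itself is simple to reproduce. In the sketch below, the per-expert counts are read off the appendix tables (Figures 12–16); only two songs are shown.

```python
# Sketch of the averaging behind the 2.6-3.6 range. Each list holds one
# beat-level count per expert (Experts #1-#5), read off Figures 12-16.
beat_level_counts = {
    "In the Mood": [3, 3, 3, 3, 3],  # the one unanimous song
    "Yeah!":       [2, 4, 3, 3, 5],  # counts range from 2 to 5
}

for song, counts in beat_level_counts.items():
    avg = sum(counts) / len(counts)
    print(f"{song}: range {min(counts)}-{max(counts)}, average {avg:.1f}")
# Despite the wide per-expert range, the averages cluster tightly.
```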

Though we were not able to correlate the number of salient beat levels with the perceived groove of a song, we do not believe the two are necessarily unrelated. Several implicit assumptions became clear only after analyzing the results, and they may account for the variability as well as for the conclusions we drew.

For example, a better method, and one that might have produced more consistent results, would have been for the authors to analyze the songs beforehand and produce a list of the instruments in each song, which participants could then have ordered from slowest beat level to fastest. This would have eliminated at least some of the variability in instrumentation, since many experts wrote different names for the same beat level. The way we chose to measure salience also had its flaws: the 0–10 scale, while seemingly simple, proved difficult to attach to salience, which has no natural unit and no value that corresponds directly across the two measures. (What, for example, does a salience of 5 mean? That the beat level is halfway salient?) As stated above, asking participants to rank the salient beat levels, and using that ranking as a comparative measure, may have been more effective.

Lastly, our method of finding the tempo of each beat level was flawed. Though the authors followed the experts’ responses as closely and accurately as possible, it would have been better to have participants report the tempo of each beat level themselves, by means of a tap-along metronome built into the survey (a brief sketch of such a metronome’s tempo computation follows below). This would have eliminated any possible bias, for however much we tried to uphold the integrity of the results, it is possible, though unlikely, that we misinterpreted some information when finding the tempos, and it would have made the results much clearer.

In another direction, a future study might have each expert rate the song’s grooviness in addition to identifying the beat levels: perhaps the connection is not between the structural beat levels of a song and its groove, but between a person’s perceived number of beat levels (however many that person hears or believes to exist) and that same person’s assessment of the song’s groove. Future studies may also want to ask the experts specific questions about their training, in order to better understand their backgrounds and the strategies they employ. With these measures in place, we believe that a sounder experiment, distributed to a larger and more varied pool of participants, could better explore the relationship between the number of beat levels and groove.
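To make the tap-metronome suggestion concrete, here is a minimal sketch, under our own assumptions about the survey interface, of how a participant’s tap timestamps could be converted into a tempo for each beat level. The helper tempo_from_taps is hypothetical, not part of any survey platform we used.

```python
import statistics

def tempo_from_taps(tap_times):
    """Estimate BPM from tap timestamps (in seconds). The median inter-tap
    interval is used so a single mistimed tap does not skew the estimate."""
    intervals = [b - a for a, b in zip(tap_times, tap_times[1:])]
    return 60.0 / statistics.median(intervals)

# Hypothetical taps at roughly 100 BPM (~0.6 s apart), with one late tap:
taps = [0.00, 0.61, 1.20, 1.95, 2.40, 3.01]
print(f"{tempo_from_taps(taps):.0f} BPM")  # ~98 BPM despite the late tap
```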

 

ACKNOWLEDGEMENTS

The authors would like to thank our musical experts for taking the time to complete our survey, as well as Professor Ève Poudrier for advising us throughout the process. We would also like to extend our gratitude to Petr Janata, lead researcher of the original study on which our experiment was based, who very generously provided us with the stimuli he and his team used and the data they collected.

REFERENCES

Janata, P., Tomic, S. T., & Haberman, J. M. (2011). Sensorimotor coupling in music and the psychology of the groove. Journal of Experimental Psychology: General, 141(1), 54–75.

Madison, G. (2006). Experiencing groove induced by music: Consistency and phenomenology. Music Perception: An Interdisciplinary Journal, 24(2), 201–208.

Madison, G., Gouyon, F., Ullén, F., & Hörnström, K. (2011). Modeling the tendency for music to induce movement in humans: First correlations with low-level audio descriptors across music genres. Journal of Experimental Psychology: Human Perception and Performance, 37(5), 1578–1594.

Tomic, S. T., & Janata, P. (2008). Beyond the beat: Modeling metric structure in music and performance. The Journal of the Acoustical Society of America, 124(6), 4024–4041.

APPENDIX

Expert #1

Song | 1st Beat Level and Salience | 2nd Beat Level and Salience | 3rd Beat Level and Salience | 4th Beat Level and Salience | 5th Beat Level and Salience
Superstition | Bass Drum + Snare: 10 | Clavinet + High Hat: 10 | Horns + Clavinet + High Hat: 9
Yeah! | Bass Drops: 8 | High Hat + Synth: 10
Freedom of the Road | Bass Drum + Snare: 10 | High Hat + Guitar: 8
What a Wonderful World | Bass Drum + Snare: 10 | Guitar Arpeggios: 9
Beauty of the Sea | Everything else: 8 | Keyboard: 10 | Keyboard: 10
Thugamar Fein an Samhradh Linn | Stuff On Downbeats Every 6: 10 | Cello: 3 | Cello: 6
The Child is Gone | Piano: 10 | Drums: 10
Mama Cita (Instrumental) | Bass Drum + Bass: 10 | High Hat + Percussion: 10
Citi Na GCumman | (no instrument given): 8 | (no instrument given): 10
Summertime | Strings: 10 | Trumpet: 10
Goodies | Claps: 10 | Vocals + Synth: 10
In the Mood | Open High Hat: 7 | Bass: 10 | High Hat + Horns: 9
Squeeze | Bass Guitar + Bass Drum + Snare: 10 | Bass Guitar: 10 | High Hat: 10

Figure 12

As stated above, Expert #1 never perceived more than 3 beat levels in a given song and tended to rate the salience of those beat levels highly, at times rating two or more beat levels in the same song as maximally salient. Expert #1 also tended to focus on the percussion parts of a piece to extract the salient beat levels, though they tended to deconstruct the drum kit, differentiating, for example, between the high hat and the bass drum and snare.

Expert #2

Song | 1st Beat Level and Salience | 2nd Beat Level and Salience | 3rd Beat Level and Salience | 4th Beat Level and Salience | 5th Beat Level and Salience
Superstition | Drums: 6 | Bass: 8
Yeah! | Percussive Click: 9 | Low Drum: 7 | Synth: 8 | Strings: 8
Freedom of the Road | Bass: 9 | Guitar: 6 | Piano: 8 | Drums: 9
What a Wonderful World | Strings: 7 | Guitar: 8 | Drums: 6
Beauty of the Sea | Saxes: 5 | Synth: 8 | Synth: 4 | Synth: 3
Thugamar Fein an Samhradh Linn | Guitar: 5 | Cello: 8 | Bagpipes: 8
The Child is Gone | Strings: 6 | Piano: 9 | High Hat: 8
Mama Cita (Instrumental) | Piano: 5 | Drums: 9 | Percussion: 6
Citi Na GCumman | Guitar’s Bass Notes: 8 | Guitar’s Midrange Notes: 6
Summertime | Strings: 6 | Drums: 9
Goodies | Sine Wave Noise: 7 | Drums: 8 | Vocals: 5
In the Mood | High Hat: 8 | Bass: 7 | Saxes From 10-12 sec: 8
Squeeze | Rhythm Guitar: 6 | Drum Kit: 8 | Electric Guitar: 7

Figure 13

Expert #2, in comparison to Expert #1, seemed to tune in more to string instruments when searching out the salient beat levels within a piece. Unlike the other experts, Expert #2 refrained from combining several instruments into a single beat level, citing only one instrument for each.

Expert #3

Song | 1st Beat Level and Salience | 2nd Beat Level and Salience | 3rd Beat Level and Salience | 4th Beat Level and Salience | 5th Beat Level and Salience
Superstition | Drums: 8 | Trumpet: 7 | Vocals: 10 | Guitars: 10
Yeah! | Synth: 9 | Vocals: 9 | Percussion: 10
Freedom of the Road | Harmony Piano: 10 | Guitar: 9 | Drums: 10 | Vocals: 10
What a Wonderful World | Bass: 6 | Trombone Fill: 7 | Strings: 9 | Vocals: 10 | Guitar: 8
Beauty of the Sea | Saxophones: 10 | Synth: 9
Thugamar Fein an Samhradh Linn | Strummed: 9 | Wind Cello: 10
The Child is Gone | Bass: 5 | Violin: 8 | Vocals: 10 | Drums + Keyboard: 10
Mama Cita (Instrumental) | Keyboard: 10 | Percussion: 10
Citi Na GCumman | Guitar Whole Note: 10 | Guitar Arpeggiation: 9 | Guitar Melody: 10
Summertime | Strings Vib: 9 | Bass: 6 | Bass Answer: 8 | Trumpet: 10
Goodies | Bass: 8 | Vocals: 10 | Percussion: 9
In the Mood | Bass: 5 | Drums: 7 | Saxes + Trumpets: 10
Squeeze | Keyboards: 6 | Bass: 10 | Drums: 10 | Guitar: 10

Figure 14

Expert #3 often neglected to identify an instrument for a first or even second beat level, focusing instead on faster and more salient instruments through which to identify each level. The bass was most frequently associated with the slowest identified beat level. As with the other experts, Expert #3’s instrumentation for “In the Mood” consists of some combination of percussion, horns, and bass.

Expert #4

Song | 1st Beat Level and Salience | 2nd Beat Level and Salience | 3rd Beat Level and Salience | 4th Beat Level and Salience | 5th Beat Level and Salience
Superstition | Bass Drum: 3 | Drum Set + Voice: 10 | Guitar Combo: 8
Yeah! | Voice (Accented Syllables): 3 | Bass Drum + Clap: 9 | High Hat: 6
Freedom of the Road | Bass Guitar + Bass Drum: 7 | Bass Drum + Snare Drum: 10 | High Hat + Guitar: 5
What a Wonderful World | Bass Drum: 3 | Drum Set + Voice: 10 | Guitar Arpeggio: 7
Beauty of the Sea | Strings (Synthesizer): 6 | (no instrument given): 6
Thugamar Fein an Samhradh Linn | Rolled Chords: 8 | Bagpipes: 6
The Child is Gone | Drum: 2 | Piano Chords: 10 | High Hat: 6
Mama Cita (Instrumental) | Chord Change: 1 | Bass Drums: 10 | High Pitched Instrument: 3
Citi Na GCumman | Chord Changes: 4 | Guitar Arpeggios: 8
Summertime | Strings + Bass Chords: 5 | Drum Pattern: 8
Goodies | Bass Drum (Synthesizer): 2 | Snare Drum: 8 | Vocals: 6
In the Mood | Brass Pattern: 4 | High Hat: 8 | Double Bass: 10
Squeeze | Drums: 4 | Bass Guitar: 10 | Solo Guitar: 10

Figure 15

Expert #4, like Expert #1, never perceived more than 3 beat levels within a given piece, though s/he tended to rate the salience of some beat levels much lower, perceiving some as only just above having no salience at all. Expert #4 was the only expert to use the survey’s comments section, noting that some instruments making up a salient beat level do not enter right at the beginning of each song clip, and that “the fastest level seems to be a combination of faster attacks rather than being defined by a main instrument,” an observation that, considered across all the experts’ responses, points to a part of the survey that was difficult to answer clearly.

Expert #5

Song | 1st Beat Level and Salience | 2nd Beat Level and Salience | 3rd Beat Level and Salience | 4th Beat Level and Salience | 5th Beat Level and Salience
Superstition | Bass, Kick, + Snare: 9 | Clav, Brass Line, Voice: 8 | Clav, Brass Line, Voice: 7
Yeah! | Kick: 7 | Whistle: 8 | Kick: 9 | Cymbals, Voice, + Synth: 9 | Ring + Voice: 5
Freedom of the Road | Bass + Slide Guitar: 9 | Kick + Snare: 10 | Guitar, High-Hat, + Voice: 9 | Hi-Hat, Voice, + Bass: 7
What a Wonderful World | Harmony: 7 | Kick, Snare, Horns, + Strings: 10 | Guitar + Hi-Hat: 9 | Voice: 5
Beauty of the Sea | Phrasing: 2 | Implication of Organ: 9 and 10 | Low Hum: 9 | Organ: 9
Thugamar Fein an Samhradh Linn | Plucked String: 5 | Cello: 8 | Pipes: 4 | Drone: 2 | Drone: 0
The Child is Gone | Phrasing: 2 | Electric Guitar: 7 | Drums, Bass, + Piano: 10 | Strings: 2 | Cymbal + Voice: 4
Mama Cita (Instrumental) | Keyboard: 7 | Kick + Bass: 9 | Melody + Bass: 7 | Cabasa: 9
Citi Na GCumman | Phrasing Emphasis: 2 | Bass Register: 7 | Strum: 10 | Chordal Rhythm: 1 | Melody: 9
Summertime | Harmony: 4 | Strings: 9 | Bass: 9 | Trombone: 7 | Brushes: 6
Goodies | Synth Whistle: 0 | Synth Whistle: 0 | Kick + Snare: 8 | Vocals: 9
In the Mood | Sax Line: 8 | Drums + Bass: 9 | Sax Solo: 8
Squeeze | Phrasing: 6 | Bass: 8 | Drums: 9 | Rhythm + Solo: 8

Figure 16: Figures 12–16 show each expert’s perceived instrumentation and its salience within each song, ordered from slowest to fastest tempo.

Expert #5 perceived the widest range of beat levels and often perceived significantly more than the other experts, hearing 4 or 5 beat levels much more frequently. However, they also gave much lower salience ratings than the other experts, even listing for “Goodies” two beat levels composed of a synth whistle with a salience of 0, which we take to mean that the synth is present as auditory stimulation without contributing to the perceived tempo.
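For anyone wishing to re-analyze the appendix, here is a small sketch of how counts like those discussed above can be pulled from rows in the Figures 12–16 format. The pipe-separated row format and the “Instrument: salience” parsing rule reflect how the tables are laid out here; the two sample rows are taken from Figures 12 and 16.

```python
import re

# Each table cell has the form "Instrument: salience"; a row's beat-level
# count is simply its number of cells. Sample rows from Figures 12 and 16.
rows = {
    ("Expert #1", "Yeah!"): "Bass Drops: 8 | High Hat + Synth: 10",
    ("Expert #5", "Yeah!"): ("Kick: 7 | Whistle: 8 | Kick: 9 | "
                             "Cymbals, Voice, + Synth: 9 | Ring + Voice: 5"),
}

for (expert, song), row in rows.items():
    cells = [c.strip() for c in row.split("|")]
    saliences = [int(re.search(r":\s*(\d+)\s*$", c).group(1)) for c in cells]
    print(f"{expert}, {song}: {len(cells)} beat levels, saliences {saliences}")
# Expert #1 hears 2 beat levels in "Yeah!" where Expert #5 hears 5.
```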

 


Call for Participation: NEMCOG @ Wesleyan!


CALL FOR PARTICIPATION
SATURDAY, APRIL 4, 2015 – WESLEYAN UNIVERSITY

The next semiannual meeting of the Northeast Music Cognition Group (NEMCOG) will take place at Wesleyan University on Saturday April 4, 2015. The goal of NEMCOG is to facilitate interaction among researchers at institutions along the Northeast Corridor who are interested in the area of music cognition, to discuss research in the field, and to identify topics of joint interest and areas for potential collaboration.

To register for the event, please RSVP to the NEMCOG organizers at nemcog1@gmail.com by Sunday, March 15, 2015. Continental breakfast and a catered lunch will be provided.

The schedule for the workshop is as follows:

8:00 – 9:00     Registration and breakfast
9:00 – 10:00    Short talks: Session I
10:00 – 10:30   Coffee break
10:30 – 11:30   Short talks: Session II
11:30 – 12:00   Coffee break
12:00 – 1:00    Short talks: Session III
1:00 – 2:00     Lunch
2:00 – 3:00     Keynote
3:00 – 3:30     Coffee break
3:30 – 4:30     Concert
4:30 – 5:00     Panel discussion with performers and scientists
5:00 – 6:00     Open house
6:00 onwards    Informal gathering (on your own – list of restaurants will be provided)

We invite submissions of very short (8-minute) presentations of research for an interdisciplinary audience. This year, we especially welcome abstract submissions in honor of David Wessel (1942 – 2014), Professor of Music at UC Berkeley, founding director of Berkeley’s Center for New Music and Audio Technologies, and past president of the Society for Music Perception and Cognition. We hope to make slots for eight-minute talks available to all, but in an effort to make room for speakers who have not spoken at NEMCOG previously, we may have to turn down some requests for slots at this meeting. If you would like to give a presentation, please indicate this in your RSVP with a tentative title and a short abstract or bio. All presented abstracts and bios will be shared with our attendees and posted on our web site.

If you are unable to come to this meeting but would like to remain involved as an interested non-attendee, you can be kept abreast of the group’s activities through continued inclusion on our e-mail list. If this message was forwarded to you by a colleague or through another e-mail list, and you would like to receive our regular announcements, please sign up for our mailing lists at http://nemcog.smusic.nyu.edu/subscribe.html

Please circulate this invitation widely to anybody that you think might be interested and able to attend either this meeting or future meetings elsewhere in the Northeast Corridor region.

Organizing Committee
Psyche Loui, Assistant Professor of Psychology and Neuroscience and Behavior, Wesleyan University
Mark Slobin, Winslow-Kaplan Professor of Music, Wesleyan University
Ron Kuivila, University Professor of Music, Wesleyan University
Gloster Aaron, Associate Professor of Biology and Neuroscience and Behavior, Wesleyan University
Ed Large, Professor of Psychology, University of Connecticut

Executive Committee
Morwaread Farbood, NYU
Psyche Loui, Wesleyan University
Panayotis Mavromatis, NYU
Ève Poudrier, Yale
Ian Quinn, Yale

Measuring Musical Engagement


You are invited to participate in a study on the different ways in which we listen to and appreciate music. The study is conducted by Thijs Vroegh, a doctoral researcher in the Music Department of the Max Planck Institute for Empirical Aesthetics in Frankfurt, Germany.

To participate, visit this page: http://ww2.unipark.de/uc/AEAMS/ospe.php?SES=09cfa797f46c4db2b3a2b8be3e2b8bc6&syid=199271&sid=199272&act=start




