Sunday, October 20, 2013

How To Record Vocals

Hi! I'm tangkk from Guangzhou, China. This lesson is for week 1 of Introduction to Music Production. I'm going to talk about how to record vocals.

Check list:

To record vocals, you need several things at hand: a microphone, an XLR cable, an audio interface, a pair of headphones, and a computer with audio recording software such as Adobe Audition or Audacity, or a DAW such as GarageBand or Cubase.

1. Microphone
The first step is to choose your mic. Depending on the type of vocal you want to record, choose either a dynamic mic or a condenser mic. Choose a dynamic mic for harder styles such as rock, where you don't need every fine detail of the singing to be captured. Choose a condenser mic if you're targeting softer styles such as blues, pop, or jazz. You should also add a pop filter or windscreen in front of the mic to protect the recording from unpleasant noises from the singer, such as plosives and hiss.

2. Audio Interface

Next, check your audio interface. Since the signal from a microphone is far below line level, it is very sensitive to noise. A mic signal should therefore always be carried over an XLR cable, which is balanced. Make sure your audio interface has an XLR input port with an input gain knob. The output of the audio interface is usually a USB cable connected to your computer.

3. Cable
XLR cables are not all the same; they come in different qualities. Try to buy a high-quality one, and whenever a short cable will do, don't use a long one. The USB cable also matters, since a low-quality USB cable may compromise data integrity and lead to unexpected results. The rule of thumb is to use the cable provided with the audio interface; if that's not possible, buy a well-made replacement.

4. Computer
Make sure the audio interface driver is properly installed on your computer. If you already have your backing track as a WAV or MP3 file, you can use software like Audacity or Audition to record your vocal. If you're working on a music project, you can record the vocal part directly in your favorite DAW such as Cubase, Pro Tools, Logic, Reaper, etc. I also recommend installing an ASIO driver (ASIO 2.0), since it reduces the delay from the vocal source to the computer.
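To see why a low-latency driver matters, it helps to look at the arithmetic behind buffer delay. Here is a minimal sketch (the buffer sizes are illustrative assumptions, not measurements of any particular driver):

```python
# Rough latency estimate for one audio buffer, to show why a
# low-latency driver (e.g. ASIO) matters. Buffer sizes are illustrative.
def buffer_latency_ms(buffer_size_samples: int, sample_rate_hz: int) -> float:
    """One buffer's worth of delay, in milliseconds."""
    return 1000.0 * buffer_size_samples / sample_rate_hz

# A typical consumer driver path might use large buffers:
print(round(buffer_latency_ms(2048, 44100), 1))  # ~46.4 ms per buffer
# An ASIO driver can often run much smaller buffers:
print(round(buffer_latency_ms(128, 44100), 1))   # ~2.9 ms per buffer
```

Tens of milliseconds of delay between singing and hearing yourself is very distracting; a few milliseconds is barely noticeable.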

5. Headphone
You also need a pair of headphones so the singer can listen to the backing music while recording.

Before recording:

There are a couple of things you need to do before actually recording.

1. Make the singer feel comfortable. Use a mic stand to place the mic at a suitable height. Don't have the singer wear the headphones while you're not recording, since this can make him/her uncomfortable. You may also want to have the singer drink some water beforehand to ease his/her throat and vocal cords a little, so as to prevent unpleasant sounds during recording.

2. Connection, i.e. setting up your workspace. First zero the input gain of the mic port, turn off the +48V phantom power if necessary, and turn off the audio interface; then plug in the mic with the XLR cable, turn the interface back on, turn on phantom power if you're using a condenser mic, and turn up the input gain knob. Ask the singer to sing the song he/she wants to record, especially the loudest part, while you monitor the input level; adjust the input gain so that the loudest part of the singing peaks at about -3 dB to -1 dB. Connect the headphones to the audio interface, pass them to the singer, and adjust the headphone volume. Then you're all set.
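The gain-staging check above can be sketched in a few lines. This is a toy illustration of the arithmetic (peak values normalized so 1.0 equals full scale), not part of any real interface's API:

```python
import math

# Given the peak sample value of a test take (1.0 = full scale), report
# the level in dBFS and whether it sits in the -3 dB to -1 dB target zone.
def peak_dbfs(peak: float) -> float:
    """Convert a linear peak in (0, 1] to decibels relative to full scale."""
    return 20.0 * math.log10(peak)

def gain_ok(peak: float) -> bool:
    """True if the loudest part lands in the recommended -3..-1 dBFS window."""
    return -3.0 <= peak_dbfs(peak) <= -1.0

print(round(peak_dbfs(0.5), 1))  # a peak of 0.5 is about -6.0 dBFS: too low
print(gain_ok(0.84))             # about -1.5 dBFS: within the window
```

Leaving 1-3 dB of headroom means an unexpectedly loud phrase is less likely to clip.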

While recording:

While recording, it's your job to monitor everything: the input level, the output level, and the overall quality of the recording. If either level goes above 0 dB, you should adjust the input gain again. If at some point you notice the singer producing an unexpected or unpleasant sound, you can stop and have him/her do that take again. With today's powerful recording software you don't have to redo everything from the top, though you can if you want something really natural and complete.

After recording:

When you've finished recording, first disconnect the headphones by zeroing the output gain knob and unplugging the jack. Then disconnect the mic by zeroing the input gain knob, switching off the phantom power if necessary, and unplugging the jack. After that you can safely remove the audio interface from the computer. Then you can do whatever audio editing you like, such as compression, reverberation, EQ, normalization, etc., in your DAW or recording software, before you finally export the audio mixdown.
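Of the editing steps listed above, peak normalization is simple enough to sketch directly. This is a minimal illustration with samples as floats in [-1.0, 1.0], not the algorithm of any particular DAW:

```python
# Peak normalization: scale the whole take so its loudest sample hits a
# chosen target level. Sample values are floats in [-1.0, 1.0].
def normalize(samples: list, target_peak: float = 1.0) -> list:
    peak = max(abs(s) for s in samples)
    if peak == 0.0:
        return samples[:]            # silence: nothing to scale
    gain = target_peak / peak
    return [s * gain for s in samples]

quiet_take = [0.1, -0.25, 0.2, -0.05]
loud_take = normalize(quiet_take)
print(loud_take)  # peak is now 1.0
```

Note that normalization raises the noise floor along with the vocal, which is one more reason to get the input gain right while recording.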

This is the end of the lesson. I can't say I'm very good at vocal recording, but I have recorded about 30 of my own works. Summing up those experiences, I've found that vocal recording depends greatly on the recording environment, the state of the singer, and whether the song fits the singer's range and style. If all of these are good, the recording will almost always be successful, and the post-editing will be easy. Otherwise there will be a lot of work in the editing stage, and the result may still not be good enough. Thanks for your attention! I hope you like my lesson. Any feedback is welcome.


Tuesday, September 10, 2013

Pop Music Machine

The problem is: how can we invent an algorithm that generates great pop music melodies? This looks like a pure generation problem, but it actually hides a conditional factor, "great", so it is really two problems: a generation problem and an evaluation problem.

Is the evaluation problem a simple accept/reject problem? It seems not. A pop melody is hard to divide into just good or bad; melodies usually fall into several levels: very bad, bad, OK, good, very good, extremely good, etc. But since our requirement is to generate "great" melodies, the evaluation algorithm can be designed to accept only melodies at the top level and reject all the rest. Then it becomes an accept/reject problem. The next question is what to accept. That is, what defines a great pop melody in terms a computer can check? And can this standard of "acceptable" evolve and update over time? This is a big problem we're going to dig into in this article.

Once we have the evaluation algorithm, the generation part is obvious: we can randomly generate musical sequences and feed them to the evaluation module. But is this the only approach? If it is, all we have to solve is the evaluation problem. Can we do it smarter, and make the system generate an acceptable melody in much less time? To push it to the extreme, can we generate a different acceptable melody every time? This is also what we're going to talk about in this article.
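The generate-and-test loop described above can be sketched as rejection sampling. The "evaluate" rule here (accept melodies that stay in the C major pentatonic scale and end on the tonic) is a stand-in assumption, nothing like a real model of "great"; the point is only the loop's structure:

```python
import random

PENTATONIC = {0, 2, 4, 7, 9}  # pitch classes of C major pentatonic

def generate(length: int = 8) -> list:
    """Produce a random melody as MIDI note numbers within one octave."""
    return [60 + random.randint(0, 11) for _ in range(length)]

def evaluate(melody: list) -> bool:
    """Toy accept/reject rule: in-scale throughout and ending on C."""
    in_scale = all(note % 12 in PENTATONIC for note in melody)
    return in_scale and melody[-1] % 12 == 0

def compose() -> list:
    while True:                      # rejection sampling: generate, then test
        candidate = generate()
        if evaluate(candidate):
            return candidate

melody = compose()
print(melody)
```

Blind rejection is wasteful (most candidates fail), which is exactly the motivation for the "smarter generation" question raised above.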

to be continued...

Wednesday, June 26, 2013


Here is the link for you to try it:

This project is called "RandomMelody". It generates a random melody while you draw on the sketch board, and the resulting graphics uniquely reflect the melody history and sometimes even "predict" the future melody. See if you can "predict" the future from the graphics without looking at the source code. When the page opens, wait a moment until everything is loaded, then you can start to play! Note that when you click the mouse button it plays a note; when you release the button, it stops immediately and plays another note. When you drag, see what happens.



It is built with Processing. Here is some of the code:

//The MIT License (MIT) - See Licence.txt for details
//Copyright (c) 2013 tangkk
// Abstract: This is an app making logically random guitar/piano mix melody by simply drawing on the screen

Maxim maxim;
AudioPlayer[] Piano;
int rann1 = 0;
int rann2 = 0;
int randrag1 = 0;
int randrag2 = 0;
boolean haveplayed = false;

void setup() {
  size(768, 1024);
  maxim = new Maxim(this);
  Piano = loadAudio("Piano/Piano", ".wav", 22, maxim);
}

void draw() {
}

void mouseDragged() {
  // deal with the graphics
  float red = map(mouseX, 0, width, 0, 255);
  float blue = map(mouseY, 0, height, 0, 255);
  float green = dist(mouseX, mouseY, width/2, height/2);

  float speed = dist(pmouseX, pmouseY, mouseX, mouseY);
  float alpha = map(speed, 0, 20, 7, 10);
  float lineWidth = 1;

  fill(0, alpha);
  rect(width/2, height/2, width, height);

  stroke(red, green, blue, 255);

  // randomly pick a brush for this drag event
  float ran = random(1);
  if (ran > 0.3)
    brush1(mouseX, mouseY, speed, speed, lineWidth, Piano);
  if ((ran > 0.2) && (ran <= 0.3))
    brush2(mouseX, mouseY, speed, speed, lineWidth, Piano);
  if ((ran > 0.03) && (ran <= 0.2))
    brush3(mouseX, mouseY, speed, speed, lineWidth);
  if (ran <= 0.03)
    brush4(pmouseX, pmouseY, mouseX, mouseY, lineWidth);

  if (haveplayed == false) {
    randrag1 = (int)random(22);
    if (random(1) < 0.1) {
      haveplayed = true;
    }
    randrag2 = (int)random(22);
    if (random(1) < 0.03) {
      haveplayed = true;
    }
  }
}

void mousePressed() {
  rann1 = (int)random(22);
}

void mouseReleased() {
  haveplayed = false;
  rann2 = (int)random(22);
}

//The MIT License (MIT) - See Licence.txt for details
//Copyright (c) 2013 tangkk

void brush1(float x, float y, float px, float py, float lineWidth, AudioPlayer[] Piano) {
//  line(x,y,width,0);
//  line(x,y,0,height);
//  line(x,y,width,height);

  // map the vertical position onto one of the 22 piano samples
  int pitchSelect;
  int unit = height/21;
  System.out.println("unit: " + unit);
  pitchSelect = (int)(y/unit);
  if (pitchSelect < 0)
    pitchSelect = 0;
  if (pitchSelect > 21)
    pitchSelect = 21;
  System.out.println("pitchSelect: " + pitchSelect);
}

void brush2(float x, float y, float px, float py, float lineWidth, AudioPlayer[] Piano) {
  // pick a sample pseudo-randomly from the horizontal position
  int pitchSelect = (int)(x + random(50)) % 22;
  if (pitchSelect < 0)
    pitchSelect = 0;
  if (pitchSelect > 21)
    pitchSelect = 21;
  System.out.println("pitchSelect: " + pitchSelect);
}

void brush3(float x, float y, float px, float py, float lineWidth) {
}

void brush4(float x, float y, float px, float py, float lineWidth) {
  // four mirrored triangles around the canvas centre
  triangle(px, py, width-x, height-y, width, height);
  triangle(width/2+((width/2)-px), py, width-(width/2+((width/2)-x)), height-y, width, 0);
  triangle(px, height/2+((height/2)-py), width-x, height-(height/2+((height/2)-y)), 0, height);
  triangle(width/2+((width/2)-px), height/2+((height/2)-py), width-(width/2+((width/2)-x)), height-(height/2+((height/2)-y)), 0, 0);
}

//The MIT License (MIT)
//Copyright (c) 2013 tangkk
//Permission is hereby granted, free of charge, to any person obtaining a copy
//of this software and associated documentation files (the "Software"), to deal
//in the Software without restriction, including without limitation the rights
//to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
//copies of the Software, and to permit persons to whom the Software is
//furnished to do so, subject to the following conditions:
//The above copyright notice and this permission notice shall be included in
//all copies or substantial portions of the Software.

AudioPlayer[] loadAudio(String stub, String extension, int numAudios, Maxim maxim) {
  AudioPlayer[] Samples = new AudioPlayer[0];
  for (int i = 0; i < numAudios; i++) {
    AudioPlayer Sample = maxim.loadFile(stub + i + extension);
    if (Sample != null)
      Samples = (AudioPlayer[])append(Samples, Sample);
  }
  return Samples;
}

Saturday, June 22, 2013

Take a photo of our feeling

One day when I was walking on campus, a fresh memory for no reason just came to my mind, reminding me of the days when I first stepped onto this campus, when I was so excited about this brand new place, and so on. At that very moment, which lasted only a few seconds, I was so happy, and it felt as if this really were the first time I had come to this place.

Sometimes we take photos to try to capture those treasurable moments in our lives. Although I still take photos these days, I seldom look at them. For me, most of the time those photos only remind us of what happened; they don't really remind us of how we actually felt at those moments, like what I described above. It would be much more valuable if our feelings could also be frozen somewhere in the digital world, so that we could access them whenever we want.

I can think of at least one good use for this kind of freezing. A couple who have lived together for many years may lose their sense of fondness, or love, towards each other. Sometimes couples try hard to grasp the memory of their first meeting or first kiss just to remind themselves how much they liked each other at the beginning, but they fail. They took a lot of photos to remember those happy hours, yet some of them still end up divorced. Imagine if feelings could be captured, stored, and instantly accessed; then couples would have no difficulty recalling how much they love each other. Wouldn't that be great?


Wednesday, June 19, 2013

A platform for people to jam music in small public space

In a public space such as a canteen or a plaza, you'll hear music coming out of centralised speakers. Most of this music is in a soft style; it tends to be slow and smooth, without a vocal part. Sometimes you feel the background atmosphere could be made better. Or sometimes you'd just like to show your existence, send a message, or do something for fun. Or at other times you feel bored and would just like to find something to do. Then maybe you can try to jam along with the background music and make something interesting happen.

How about we invent a social-network jamming platform (actually my team has already implemented one) fulfilling this need? Let's call it "WIJAM", meaning "we instantly jam together over WiFi". The basic idea is easy: someone takes out his/her cell phone, opens an app, starts to create a melody, and that melody is instantly mixed with the original background music and the melodies from other players, then broadcast via a central speaker system.

But there are many problems underlying this scenario, and they lead to deeper research topics. The first problem that naturally pops out is: do the players really know how to jam? A huge problem. Assuming some of them are musical novices, they may have very good musical ideas in mind but not know how to express them on a traditional instrument layout such as a keyboard. OK, maybe we provide them with a fixed musical scale, say a pentatonic or Ionian scale, so that they stay within the scale. Then what about the key? Some simple background music uses only a fixed key without modulation, but even with such background music the novices still need to choose a key to apply the scale to. How is that possible? Not to mention grooves that change keys or even change the available scales. No, we cannot ask users to determine so many things; we should keep them as comfortable and relaxed as possible. This leads to the idea of a master controlling system that instantly assigns keys and scales to all the users. Note that since the final outcome of the jam is played back via the central speakers, this master system is also in charge of collecting the performances from the players and distributing them to the speakers.

Yet this is far from the end of the story. As you know, the touch screens of most mobile phones are relatively small. If you have ever played a piano keyboard app on a phone, you probably found it a bad experience because the keys on the screen were too small to touch. Back to our scenario: what exactly should the master assign to the users? A real piano keyboard contains 88 notes, spanning approximately 7 octaves. The number of notes of a C Ionian scale (which contains 7 distinct pitch classes) across those octaves is approximately 7 * 7 = 49. But obviously we cannot fit all 49 notes on a single phone screen. Note that most of the time piano players focus on only 2-3 octaves in the middle of the keyboard; they seldom go to the very low or very high registers. Also note that our scenario already has background music, which probably contains the bass part. So we can safely omit the 3 lowest octaves and the 2 highest, keeping about 2-3 octaves, which still gives 14-21 notes. That is good enough for normal expression. But then how should we place these notes on the touch screen?
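The octave-trimming arithmetic above can be sketched directly: build the playable note set by taking a scale's pitch classes over the middle octaves only. The MIDI numbering (60 = middle C) and the default octave choice are illustrative assumptions:

```python
IONIAN = [0, 2, 4, 5, 7, 9, 11]  # pitch classes of the C Ionian (major) scale

def playable_notes(root_octave: int = 4, n_octaves: int = 2) -> list:
    """Scale notes over n_octaves starting at the given octave (C4 = 60)."""
    base = 12 * (root_octave + 1)  # MIDI number of that octave's C
    return [base + 12 * o + pc for o in range(n_octaves) for pc in IONIAN]

notes = playable_notes()
print(len(notes))  # 14 notes for 2 octaves, matching the 14-21 estimate
```

Three octaves would give 21 notes, the upper end of the range quoted above.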

Actually this is one of the crucial issues that determine the success of this application, and it should be admitted that no final decision has been made on it yet. One simple approach we adopted at the beginning was to evenly divide the screen into 4 rows and 2 columns, giving the user 8 notes to play with at a time. The master has several 8-note patterns ready to assign to the users, and each pattern stays within a certain scale such as Ionian or Dorian. The advantage of this layout is that every note has a relatively large touchable area, and the novice user's freedom of expression is kept within a scaled 8-note set. But the disadvantages seem to outweigh the advantages. Those who want more notes can't get them until the master assigns a new pattern, and ordinary users cannot tell the notes apart until they play and listen; they soon get bored once they realize it is not intuitive to control their own expression. There are two facets to these drawbacks: 1. expert users want more freedom, so they need a detailed control panel; 2. ordinary users want more control over their expression, so they need an intuitive control panel.

To cater to both kinds of users, the system design rule "leave it to the user" needs to be applied: two layouts should be designed, and users choose their preference. For expert users, a note-based layout should be designed. It contains 16-32 notes and should be laid out in a systematic way that is easy to start with yet hard to master. For novice users, a graphic-based, or drawing-based, layout should be preferred. It maps the drawing to the rise and fall of the melody line, which corresponds to the feeling the user wants to express; the algorithm then determines which pitch to use according to a chord-scale combination provided by the master. If you are keen enough, you may notice an issue with the drawing-based interface: how can the user express rhythm?

One approach is to use finger motion to signal the note-on of the new note and the note-off of the old one, where the new note is mapped to the new position of the finger. For instruments such as guitar and piano we don't even need to signal the note-off most of the time; it can be signaled automatically after the note finishes decaying. This seems a workable approach, though we haven't implemented it yet. Another approach is to use one finger to signal the rhythm and another finger to draw the melody. This is also interesting, and quite convenient indeed.
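The first approach can be sketched as follows. This is a toy model of the not-yet-implemented idea, not the app's code; the row-to-pitch table and screen height are stand-in assumptions:

```python
PITCHES = [60, 62, 64, 65, 67, 69, 71, 72]  # one C major octave, mapped to screen rows

class DrawnMelody:
    def __init__(self):
        self.current = None  # the pitch currently sounding, if any

    def touch(self, y, screen_height=1024.0):
        """Map a vertical finger position to a list of MIDI-style events."""
        row = min(int(y / screen_height * len(PITCHES)), len(PITCHES) - 1)
        pitch = PITCHES[row]
        events = []
        if pitch != self.current:
            if self.current is not None:
                events.append(("note_off", self.current))  # release the old note
            events.append(("note_on", pitch))              # start the new note
            self.current = pitch
        return events

m = DrawnMelody()
print(m.touch(0))    # [('note_on', 60)]
print(m.touch(600))  # [('note_off', 60), ('note_on', 67)]
```

Rhythm emerges for free: a note-on fires exactly when the finger crosses into a new row, so the drawing speed is the rhythm.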

So much for the interface issue. What about the overall performance outcome of the system? How can the performances of an ad-hoc group of scattered experts and novices make sense, fit together, or at least sound good? This problem is two-fold: the easier and more fundamental part is making the mixture sound good, while the more difficult part is making it make sense.

So how? Tackling the "sound good" problem is relatively easy. It demands something we might call algorithmic mixing and mastering. Theoretically (I do not have a source for this statement), the sum of any number of channels of any sound can be made comfortable to the human ear as long as it is well mixed and mastered, regardless of the underlying musical structure (such as the chord progression). In other words, we can always manage to make it sound comfortable. But the problem is how. Since this has not been implemented yet, I can only guess. For example, a very simple algorithm would set the volume of every channel to 1/n, where n is the number of channels. This makes sense, but it is not ideal, as you may argue that some channels should have higher weight. So here comes the question: how do we determine which channel gets a higher weight? One approach is again to leave it to the user, but since we assume most users are musical novices, that may not work as wished. Note that our scenario is public-space jamming, which is very different from an on-stage live performance. An on-stage performance yields a sense of being focused, while our scenario yields a sense of scattering, in which everyone enjoys being hidden rather than watched. So "weight" doesn't carry the same meaning as before. Interestingly, people participating in this jam will probably still wish their performance to be heard by everyone else. Combining these two observations, we can safely derive a bottom line for the auto mixing and mastering mechanism: every player should at least be able to hear their own contribution from time to time. With this key observation, we can write an algorithm involving some randomness to implement the function. Not a big deal yet.
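The bottom-line rule above can be sketched as a randomized variant of the 1/n mix: equal gains by default, with one randomly chosen player boosted each mixing cycle so every contribution is clearly audible from time to time. The boost factor is an illustrative assumption:

```python
import random

def mix_gains(n_channels: int, boost: float = 2.0) -> list:
    """Per-channel gains for one cycle: 1/n each, one random channel boosted."""
    gains = [1.0 / n_channels] * n_channels
    featured = random.randrange(n_channels)  # the player "featured" this cycle
    gains[featured] *= boost
    total = sum(gains)
    return [g / total for g in gains]        # renormalize to avoid clipping

gains = mix_gains(4)
print(sum(gains))  # 1.0 (within float error): the mix never exceeds full scale
```

Over many cycles every channel gets featured with equal probability, which is exactly the "heard from time to time" guarantee.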

The big deal is how to make the outcome make sense. To make sense, a logical expression of music is needed; it must comfort not only our ears but our minds. Since the basic assumption is that the participants are mostly musical novices, this becomes an algorithmic composition problem. It could be a classical algorithmic composition problem when the musical content involves the pitches and timbres of standard instruments, or a new one when more sound-synthesis elements are involved. Again this divides into two sub-problems. To tackle the classical one, we should attempt a stricter algorithmic composition approach, which involves a lot of AI and is far from fully developed; the techniques can be found in plenty of literature. The "modern" algorithmic composition problem, which aims at modern musical styles such as electroacoustic music, is relatively easier as I see it, because its aesthetics are much more subjective than those of music within the range of classical theory. As for implementing the algorithm, there are several options. One is to implement it on the user's side, so that each user jams within their own algorithmic logic; the obvious disadvantage is that their "logics" may collide, making the outcome unpleasant. True! So there is another approach: implement the algorithm on the master's side, so the master can coordinate all the users to create a piece of good music. In that case the master is effectively both an algorithmic composer and a conductor. A third approach is for both master and users to run algorithms, with a feedback channel from master to user whose messages tell the user's algorithm to adjust itself to the whole performance.
I think the third approach is the optimal one. Talking more about this topic is beyond my ability for the moment, so I'd better stop here.

And there are still other issues. The audio engine: for now this project uses AUPreset + AudioUnit + AVAudioSession to make sound, and the AUPreset file points to instrument samples prerecorded by Apple's GarageBand. I don't know whether this is legal (though according to the official statement it seems to be). Anyway, a more interesting and challenging approach would be to use the mobile STK, mobile Csound, or ChucK for sound synthesis. Hopefully with one of these engines the size of the app can be greatly reduced, and the app gets filled with some of the most advanced stuff as well. Another issue is whether to use OSC instead of MIDI as the transport for musical performance data, since OSC seems to have lower network latency. We'll see. A third issue, which is a big one, is evaluation, and I'm going to use the next two paragraphs to discuss it.

This kind of work, if ever published in academic journals, will definitely confront the problem of evaluation. In other research areas, such as computer architecture, evaluation can be done quite straightforwardly: there are indexes indicating whether an architecture (or a certain architectural improvement) is good or not, the most widely used being "performance", which indicates how fast a system runs. In the field of computer music, evaluation becomes much more ambiguous. How do we determine whether a computer music system is good or bad? To narrow the discussion: in what sense can we say that a networked collaborative system is good? One approach found in many papers is to "let the audience judge". In these papers the audience's feedback is presented, some of it subjective impressions, some of it advice, some of it questionnaires. Similarly, in one paper I saw, the evaluation method was to post the outcomes of the computer music system on the web and let viewers rate them. There is also the "let the participants judge" method. The logic behind these evaluation methods is quite natural and obvious: since music is an aesthetic process, the final judgement should be made by human beings.

But there is still another approach, which could be called "machine appreciation". It would have to be implemented as "machine listening" in a very high-level sense. Ordinary machine listening only cares about the audio material and the structure behind it, while machine appreciation demands a higher level of listening that cares about the musical material and the musical structure. Of course, everything can be scaled down to a simplest case: for machine appreciation in the context of classical musical structure, the simplest algorithm only needs to check whether the notes are within the scale and whether the voice leading obeys the rules. But as you will agree, this is far from enough. Whether a machine is able to appreciate music at all is itself a big, big question.

Nevertheless, we can use machines for some measurements, such as whether doing such-and-such makes the users more active, or whether such-and-such makes a collaborative system more responsive. These are things machines can absolutely do.

So much for the evaluation part. I guess I've now given an introduction to this collaborative music jamming application for small public spaces. I've discussed the big scenario, the roles of master and users, the user interface issues, the output quality issue, and the evaluation issue. With all of these, I believe an extremely good public-space jamming application can be created. Hopefully it can be done soon!

If you are a "master" looking for free jam-along or backing tracks, here are some great ones:

to be continued...






Thursday, June 6, 2013

Pop song melody as a hierarchical sequence

Consider a piece of pop song melody. You may say it is a sequence of pitches. But if I simply put all those pitches into a MIDI sequence with random timing and have a machine play it back, it will probably become a dead sequence; that is, it conveys no logical meaning.

So a melody does not contain only pitches. Pitches are the most direct phenomenon we get, but a melody is more than pitches. To extend a step further: pitches are modulated by rhythms. Even in a simple 4/4 bar with 6 notes inside, there are countless possible rhythms, each expressing a different musical feeling. So we add rhythm to our previous MIDI sequence; it sounds more logical and meaningful, but somehow still feels dead.

Something is still missing. When a singer sings, she not only sings pitches according to the rhythm, but also expresses her highs and lows at the same time. These emotional highs and lows can be related to the pitch contour, but they are not exactly the same. The emotional feeling adds another layer above the pitches, called "dynamics". Dynamics can be simply understood as the loudness, or velocity, of a note. We can add dynamics to our MIDI sequence too, but, as you may guess, it is still not perfect.

Now we have pitch, rhythm and dynamics, and what else?

Most of the time a singer sings one pitch at a time, but sometimes she drags a pitch here and there. And at still other times she places a barely noticeable, slightly lower or higher pitch just before the pitch she's going to sing. In this way the singer creates a smooth flow of sound. This is called articulation. Can we do articulation in a MIDI sequence? Yes, but not as easily as the previous three layers, simply because there is no simple way to encode articulation. Our effort to emulate a beautiful melody with a MIDI sequence can safely stop here. But anyway, we have discovered the 4th layer of a pop song melody.

So a pop song melody is a hierarchical sequence which, from bottom to top, as I see it, consists of: 1. rhythm; 2. pitch; 3. articulation; 4. dynamics.
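The four-layer view above can be made concrete as a data structure in which every note carries all four layers. The field encodings (beats, MIDI note numbers, 0-127 velocity, articulation tags) are illustrative choices, not part of the post itself:

```python
from dataclasses import dataclass

@dataclass
class MelodyNote:
    onset_beats: float      # layer 1: rhythm (when the note starts)
    duration_beats: float   # layer 1: rhythm (how long it lasts)
    pitch: int              # layer 2: MIDI note number
    articulation: str       # layer 3: e.g. "plain", "slide", "grace"
    velocity: int           # layer 4: dynamics, 0-127 as in MIDI

melody = [
    MelodyNote(0.0, 1.0, 67, "plain", 80),
    MelodyNote(1.0, 0.5, 69, "grace", 96),   # approached via a grace note
    MelodyNote(1.5, 2.5, 72, "slide", 112),  # dragged upward, sung louder
]
print(len(melody), melody[0].pitch)
```

A plain MIDI file captures the first two layers and velocity easily; it is the articulation field that, as argued above, has no simple standard encoding.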




Wednesday, June 5, 2013

On writing a simple mobile musical application

This article does not provide details on how to write such an app; instead it covers high-level decisions and ideas on the subject.

First you need to decide what kind of musical application you want to write. The most common one is an audio player, so let's talk about audio players first. For an audio player you don't need to worry about MIDI or a sequencer; all you need to consider is the playback mechanism. Say you're going to write such a thing on Android. The first thing to do is search for the keywords "audio", "playback", and "audio playback" on the official Android developer site. Alternatively, Google is always there to help you gather essential information: try googling "Android, audio, playback", and the useful stuff is there!

One really important thing is sample code. If you find some sample code similar to what you want to achieve, you're almost done. And if the sample code contains some critical functions that are difficult to implement, you're very lucky, because you no longer need to worry about those functions. With the help of sample code you don't have to start everything from scratch. What's more, sample code also trains your coding style and makes you a better programmer. So I advise you to always search for one more keyword: "sample code".

The same holds true for writing other applications that involve MIDI, such as a mobile piano. Since it involves MIDI manipulation, you need to know how MIDI is handled on the given mobile platform. Say you're going to write a simple piano for iOS: google "iOS, MIDI", or go to the iOS developer library and search for "MIDI". Don't forget about "sample code". Besides MIDI, such an application also involves a so-called "virtual instrument"; on iOS this can be implemented with a type of file called an "AUPreset", so search for that too! Through a few iterations, you will eventually arrive at the same destination everyone else did.

Usually you can gain a clear picture of how to implement the functionality of the application relatively quickly. Then you sit down, write the code, and spend a week or two debugging. And then you realize that the real problem is not the functionality but the user interface design! Trust me! Only then do you appreciate the value of the design people!


Monday, April 22, 2013

About Using AUPreset file to make virtual instruments within Xcode - pay attention to the sound file's name

First, there is existing material about AUPreset files, how to generate them, and how to use them within Xcode, which I won't repeat here. Please see
and the related links provided there.

What I want to mention here is the outcome of my nearly 3 hours of debugging. I tried to use some files named "F#2.caf" and "G#2.caf" as my instrument sounds. I put them into the "Sounds" folder, made an .aupreset file, and pointed the file references to the right place with exactly those file names, "F#2" and "G#2". But it turned out that when I downloaded this to my device, the instrument just wouldn't work! Then I changed the names to "FF2.caf" and "GG2.caf", and it worked. (It took me a long time to realize I should make this change.)

Then I double-checked the working .aupreset file, Trombone.aupreset, and found that although it uses files called "1a#.caf" and "2a#.caf", it references them as "1a%23.caf" and "2a%23.caf", that is, with the "#" percent-encoded as "%23". Oh! That's the problem. (It really took me a long time to discover.)

I'm not sure whether there are any other file-name constraints in an aupreset. I can only suggest that following the convention of "Trombone.aupreset" is the best practice.
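Judging by the "#" case (and by the later note about spaces), the file-reference entries seem to use URL-style percent encoding. A small shell sketch to compute the encoded form of a file name; it only handles the two characters that bit me, and the name "F#2.caf" is just my example from above:

```shell
# Percent-encode '#' (%23) and space (%20) in an .aupreset file-reference
# name. This is a guess at the general rule based on Trombone.aupreset;
# other special characters may need encoding too.
name='F#2.caf'
encoded=$(printf '%s' "$name" | sed 's/#/%23/g; s/ /%20/g')
echo "$encoded"
```

Running this prints the name you would put into the file-reference section instead of the raw one.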

If you are reading this, you might be facing a similar problem. Try it!

(Plus: recently I also found that the space character is not allowed in the aupreset's file-reference part.)







Friday, February 15, 2013

Random Square Music Player

This is an extremely cool music player that keeps randomly playing songs from your local audio resources through your selected EQ.
For those in need, it also contains a very cool guitar tuner (E4, B3, G3, D3, A2, E2).
The most attractive thing about this app is its simple and elegant, modern-style "six square" layout.
The tuner's notes are played via MIDI, using the android-midi-libv2-1 library.
Here is the link to Google Play Page of this little app:
Here is the link to the source of this app in github:
Or simply click the badge below:

Get it on Google Play


Friday, January 4, 2013

Install Simplescalar 3.0 on Ubuntu 12.04

My platform:
Intel® Core™ i5-3550 CPU
32bit Ubuntu 12.04

Here is the procedure for installing SimpleScalar on my computer. Note that this is written for Linux newbies, so it may spell out things that are obvious to experienced users. For those who do not need the cross-compiler and binutils, step 5 alone is enough to set up SimpleScalar and run some binary-form benchmarks.

1. Download the sources

Download simplesim-3v0e.tgz and simpletools-2v0.tgz from

Download simpleutils-990811.tar.gz (if you insist on using the simpleutils-2v0.tgz from the official SimpleScalar page, stop here and try this installation guide instead)
Download the cross compiler gcc-
You may also need the benchmarks.

2. Setup the environment


export IDIR=/any/directory/you/choose/ (for example, mine is /home/tangkk/Build/simplescalar)
export HOST=i686-pc-linux
export TARGET=sslittle-na-sstrix
(This assumes your host is little-endian. The target could also be ssbig-na-sstrix; if that is your case, replace every occurrence of "sslittle-na-sstrix" below with "ssbig-na-sstrix".)

(Note that if you quit the terminal, you will need to export these variables again the next time you log in. To give you some idea of what the settings mean: in the HOST triplet, i686 specifies the CPU architecture (other possible values are "arm", "mips", "i386", etc.), "pc" specifies the vendor, and linux specifies the operating system (other possible values include "solaris" and "gnu"); the TARGET value specifies the SimpleScalar target and its endianness. You can find more information at [1].)

(Further reading: Host and Target Configuration Names; Build, Host and Target)
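To avoid retyping these exports every session, one option is to keep them in a small script and source it each time (a sketch; the path and the IDIR value are just examples based on mine, adjust them to yours):

```shell
# Write the three exports to a file once, then ". /tmp/simplescalar-env.sh"
# at the start of each session (or append the same lines to ~/.bashrc).
cat > /tmp/simplescalar-env.sh <<'EOF'
export IDIR=$HOME/Build/simplescalar
export HOST=i686-pc-linux
export TARGET=sslittle-na-sstrix
EOF
. /tmp/simplescalar-env.sh
echo "$HOST $TARGET"
```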


Create the IDIR directory by executing "mkdir $IDIR", then move all the tarballs you've just downloaded into it.


Make sure you update your package lists:

sudo apt-get update

make sure you install:

gcc-multilib (may be optional)
g++-multilib (may be optional)

If you haven't, do:

sudo apt-get install <package name>

3. Unpack the Simpletools-2v0

(Basically this package contains the sources of gcc (the GNU C Compiler) and a glibc ported to the SimpleScalar architecture. According to [7], building the glibc is non-trivial, so this simpletools package ships pre-compiled libraries in both the ssbig-na-sstrix/ and sslittle-na-sstrix/ folders.)

Simply do:

cd $IDIR

tar xvfz simpletools-2v0.tgz
rm -rf gcc-2.6.3

There is a gcc-2.6.3 folder, but you can remove it because later we will use the newer gcc- (a cross-compiler) instead.

(After this step you have ssbig-na-sstrix and sslittle-na-sstrix folders under $IDIR, each containing an include folder and a lib folder.)

4. Install the SimpleUtils-990811

(Basically this package contains the sources of a cross GNU binutils.)

First we unpack the tarball:

cd $IDIR

tar xvfz simpleutils-990811.tar.gz
cd simpleutils-990811

Then, replace "yy_current_buffer" with "YY_CURRENT_BUFFER" in the file ld/ldlex.l. (You can use "vim" or "gedit" to fix it.)
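If you'd rather not open an editor, the same fix can be scripted with sed (run from $IDIR/simpleutils-990811). The demo below applies the identical substitution to a throw-away stand-in line so you can see the effect safely:

```shell
# On the real tree you would run:
#   sed -i 's/yy_current_buffer/YY_CURRENT_BUFFER/g' ld/ldlex.l
# Demo of the same substitution on a stand-in file:
printf 'yy_delete_buffer( yy_current_buffer );\n' > /tmp/ldlex_demo.l
sed -i 's/yy_current_buffer/YY_CURRENT_BUFFER/g' /tmp/ldlex_demo.l
cat /tmp/ldlex_demo.l
```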

After that, you can configure:

cd $IDIR/simpleutils-990811

./configure --host=$HOST --target=$TARGET --with-gnu-as --with-gnu-ld --prefix=$IDIR
(If you want to know the meaning of these settings, refer to [1]. Because we are building a "cross" GNU binutils, both HOST and TARGET must be specified.)

(The configure step should succeed. If not, check whether you followed all the instructions above before asking someone. The same advice applies to all the following steps.)

Then you can make and install with:


make install

(After this step you have another 5 folders under $IDIR: bin, lib, include, man, and share. Moreover, a bin folder has also been added to both the ssbig-na-sstrix and sslittle-na-sstrix folders.)

5. Install the Simplesim-3v0e

(This package contains the sources of the SimpleScalar architecture and micro-architecture; that is, the sources of the SimpleScalar functional simulator and performance simulator, which refer to basically the same thing.)

Firstly, unpack it:

cd $IDIR

tar xvfz simplesim-3v0e.tgz
cd simplesim-3.0
make config-pisa

(Or make config-alpha, according to your need. If you later want to change the target configuration from one to the other, you need to:
make clean
make config-alpha)

(Because the simulator runs on the host only, this is a native build. Everything should be OK, and afterwards you will see "my work is done here...".)

Run the following to test the simulator

./sim-safe tests/bin.little/test-math

(You will see something like:

sim: ** starting functional simulation **
pow(12.0, 2.0) == 144.000000
pow(10.0, 3.0) == 1000.000000
pow(10.0, -3.0) == 0.001000 )

6. Install the gcc cross compiler

This is the most error-prone part for the whole installation.

As always, unpack the file first:

cd $IDIR

tar xvfz
cd gcc-
./configure --host=$HOST --target=$TARGET --with-gnu-as --with-gnu-ld --prefix=$IDIR

So far so good! But we cannot simply run "make" here, because various incompatibilities will cause many errors. To get past them, we need to fix some of the sources:

First, for convenience, give yourself write access to the current directory:

chmod -R +w .
(Don't forget the little "." at the end of this command!)

Following the instructions provided by [2], we have:

Append "-I/usr/include" at the end of line 130 of the Makefile.


Replace <varargs.h> with <stdarg.h> in line 60 of protoize.c.


In line 341 of obstack.h, change
*((void **)__o->next_free)++   to   *((void **)__o->next_free++)



cp ./patched/sys/cdefs.h ../sslittle-na-sstrix/include/sys/cdefs.h 

cp ../sslittle-na-sstrix/lib/libc.a ../lib/
cp ../sslittle-na-sstrix/lib/crt0.o ../lib/
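The three hand edits above can also be scripted with sed. This is only a sketch: the line numbers are the ones quoted above and may shift between tarballs, so verify them in your copy before editing in place. The demo runs the same substitutions against throw-away stand-in files so the commands can be tried safely:

```shell
# One-line stand-ins for the three lines being edited:
mkdir -p /tmp/gcc-fix-demo
printf 'GCC_CFLAGS=$(CFLAGS) -I./include\n' > /tmp/gcc-fix-demo/Makefile
printf '#include <varargs.h>\n'             > /tmp/gcc-fix-demo/protoize.c
printf '*((void **)__o->next_free)++;\n'    > /tmp/gcc-fix-demo/obstack.h

# On the real tree you would address the real line numbers, e.g.
#   sed -i '130s|$| -I/usr/include|' Makefile
sed -i '1s|$| -I/usr/include|'       /tmp/gcc-fix-demo/Makefile
sed -i '1s|<varargs\.h>|<stdarg.h>|' /tmp/gcc-fix-demo/protoize.c
sed -i '1s|next_free)++|next_free++)|' /tmp/gcc-fix-demo/obstack.h

cat /tmp/gcc-fix-demo/Makefile /tmp/gcc-fix-demo/protoize.c /tmp/gcc-fix-demo/obstack.h
```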


Then build it [3]:

make LANGUAGES="c c++" CFLAGS="-O" CC="gcc"

Then you will come across errors in insn-output.c; fix them by appending "\" at the end of lines 675, 750 and 823 of insn-output.c. (Note that if you "make clean" and "make" again, you will have to redo this, because the file is regenerated by the build procedure.) Then make again:

make LANGUAGES="c c++" CFLAGS="-O" CC="gcc"
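Since insn-output.c is regenerated on each clean build, the backslash fix above is handy to script too (line numbers as quoted; check them in your tree first). Demo on a stand-in line:

```shell
# On the real file you would run:
#   sed -i '675s/$/\\/; 750s/$/\\/; 823s/$/\\/' insn-output.c
# Demo of appending a trailing '\' to a line:
printf 'output the operands\n' > /tmp/insn_demo.c
sed -i '1s/$/\\/' /tmp/insn_demo.c
cat /tmp/insn_demo.c
```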

Then you will come across something like 
"*** buffer overflow detected ***: /home/tangkk/simplescalar/sslittle-na-sstrix/bin/ar terminated"
This can be solved by the method suggested by [2], which is quoted here:
"2 At this point you might encounter a "buffer overflow" if you use Ubuntu 8.10 or later. If so,
download the following files and put them in $IDIR/sslittle-na-sstrix/bin:"
(And I should add one more comment here for the newcomers:
cd $IDIR/sslittle-na-sstrix/bin/
chmod +x ar ranlib)

Now go back to your gcc- directory; after doing so, you can make once again:

make LANGUAGES="c c++" CFLAGS="-O" CC="gcc"

Then you may come across something like
"In file included from /usr/include/features.h:389,
                 from /usr/include/limits.h:27,
                 from include/limits.h:112,
                 from include/syslimits.h:7,
                 from include/limits.h:11,
                 from ./libgcc2.c:1121:
/usr/include/gnu/stubs.h:7: gnu/stubs-32.h: No such file or directory"

This may be a bug in Ubuntu, but I don't know for sure. Referring to [4], it can be solved by appending "-I/usr/include/i386-linux-gnu" at the end of line 130 of the Makefile. (If it still doesn't work, install gcc-multilib and g++-multilib and try again.)

(So now line 130 of the Makefile becomes:
GCC_CFLAGS=$(INTERNAL_CFLAGS) $(X_CFLAGS) $(T_CFLAGS) $(CFLAGS) -I./include -I/usr/include -I/usr/include/i386-linux-gnu )

Then you make again:

make LANGUAGES="c c++" CFLAGS="-O" CC="gcc"

And there is still an error in the generated file cxxmain.c. It can be solved by the method suggested in [5]: simply comment out lines 2978-2979 of cxxmain.c. Then make again:

make LANGUAGES="c c++" CFLAGS="-O" CC="gcc"

There should be no error this time.

7. Make enquire
Still under the $IDIR/gcc- directory, do

make enquire

Then it will pop out an error like "undefined reference to `__isoc99_sscanf'", which, referring to [6], can be solved by adding "-D_GNU_SOURCE" after "$(ENQUIRE_CFLAGS)" in line 995 of the Makefile in that directory (namely $IDIR/gcc-). Then make again:

make enquire
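The Makefile tweak from this step can likewise be scripted (a sketch; verify that line 995 still holds the ENQUIRE_CFLAGS line in your copy). Demo on a stand-in line:

```shell
# On the real Makefile you would run:
#   sed -i '995s|\$(ENQUIRE_CFLAGS)|& -D_GNU_SOURCE|' Makefile
# Demo of the same substitution ('&' re-inserts the matched text):
printf '$(CC) $(ENQUIRE_CFLAGS) enquire.c\n' > /tmp/enquire_demo
sed -i '1s|\$(ENQUIRE_CFLAGS)|& -D_GNU_SOURCE|' /tmp/enquire_demo
cat /tmp/enquire_demo
```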

Finally we are done with the build!

Now we follow the rest of those old references to install the cross-compiler:

../simplesim-3.0/sim-safe ./enquire -f > float.h-cross
make LANGUAGES="c c++" CFLAGS="-O" CC="gcc" install

8. Hello World!
We can check the toolchain with a hello.c program. Use vi to write the following program, hello.c:

#include <stdio.h>

int main() {
  printf("Hello World!\n");
  return 0;
}
Then compile it with the cross compiler:

$IDIR/bin/sslittle-na-sstrix-gcc -o hello hello.c
(If this fails, try entering the above "bin" directory and running "./sslittle-na-sstrix-gcc -o ../hello ../hello.c", assuming hello.c is in the $IDIR directory. If it still fails, something may be wrong with your environment variables; restart the terminal and try once again.)

Then you can run the output file "hello" with the simulator:

$IDIR/simplesim-3.0/sim-safe hello

If you see something like this, the whole toolchain works:

sim: ** starting functional simulation **
Hello World!


For test benches: the tests-alpha/ and tests-pisa/ folders under simplesim-3.0/ contain some benchmarks, and there are also some instructors' benchmarks here.

To remind you once again: if you get stuck at any of these steps, check whether you strictly followed all the previous steps before asking someone else.


[7] Todd M. Austin, A User's and Hacker's Guide to the SimpleScalar Architectural Research Tool Set (for tool set release 2.0), 1997

Also, thanks to this page, although I did not reference it! It contains a lot of useful links.

The following link is a very good lab on how to use SimpleScalar:
whose parent link:
contains a lot of useful tools for understanding computer architecture and organization as well.