Wednesday, June 19, 2013

A platform for people to jam music in small public spaces

In a public space such as a canteen or a plaza, you'll often hear music coming out of centralised speakers. Most of this music is in a soft style: slow and smooth, without a vocal part. Sometimes you feel the background atmosphere could be better. Or sometimes you'd just like to announce your presence, send a message, or do something for fun. Or at other times you feel bored and just want something to do. Then maybe you can try jamming along with the background music to make something interesting happen.

How about we invent a social-network jamming platform (actually, my team has already implemented one) to fulfil this need? Let's call it "WIJAM", meaning "we instantly jam together over WIFI". The basic idea is simple: someone takes out his/her cell phone, opens an app, starts to create a melody, and that melody is instantly mixed with the original background music and the melodies from other participants, then broadcast over a central speaker system.

But many problems lurk behind this scenario, and they lead to deeper research questions. The first problem that naturally pops up is: do the participants really know how to jam? A huge problem. Assuming some of them are musical novices, they may have very good musical ideas in mind but not know how to express them on a traditional instrument layout such as a keyboard. OK, maybe we provide them with a fixed musical scale, say a pentatonic or an Ionian scale, so that whatever they play stays in scale. Then what about the key? Some simple background music uses only a fixed key without modulation, but even with such background music the novices still need to choose the key in which to apply the scale. How could they? Not to mention grooves that change keys or even change the available scales. No, we cannot ask users to decide all of these things; we should keep them as comfortable and engaged as possible. This leads to the idea of a master controlling system that instantly assigns keys and scales to all the users. Note that since the final outcome of the jam is played back via the central speakers, this master system is also in charge of collecting the performances from the players and distributing them to the speakers.
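The master's key/scale assignment can be sketched in a few lines. This is a minimal illustration, not our implementation: the class and scale names are made up, and a real master would push the assignment to players over the network.

```python
# Hypothetical sketch of the master's key/scale assignment.
# Scales are semitone offsets within one octave.
IONIAN = [0, 2, 4, 5, 7, 9, 11]
MAJOR_PENTATONIC = [0, 2, 4, 7, 9]

class Master:
    def __init__(self):
        self.key = 0                    # 0 = C, 1 = C#, ... 11 = B
        self.scale = IONIAN

    def set_groove(self, key, scale):
        """Called when the background music modulates."""
        self.key = key
        self.scale = scale

    def assignment(self):
        """Pitch classes every player is allowed to use right now."""
        return sorted((self.key + step) % 12 for step in self.scale)

m = Master()
m.set_groove(2, MAJOR_PENTATONIC)       # D major pentatonic: D E F# A B
print(m.assignment())                   # [2, 4, 6, 9, 11]
```

When the background groove modulates, one `set_groove` call on the master retargets every player at once, so no novice ever has to think about keys.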

Yet this is far from the end of the story. As you know, the touch screens of most mobile phones are relatively small. If you have ever played a keyboard on a mobile phone, you probably remember it as a bad experience, because the on-screen keys are too small to touch accurately. Back to our scenario: what exactly should the master assign to the users? A real piano keyboard contains 88 notes, spanning roughly 7 octaves. A C Ionian scale (which contains 7 distinct pitch classes) therefore has approximately 7 * 7 = 49 notes across the keyboard. But obviously we cannot fit all 49 notes on a single phone screen. Note that most of the time piano players focus on the 2-3 octaves in the middle of the keyboard; they seldom venture into the very low or very high registers. Also note that our scenario already has background music, which probably covers the bass part. So we can safely omit the 3 lowest octaves and the 2 highest, keeping about 2-3 octaves, which still leaves 14-21 notes. That is enough for normal expression. But how should we place these notes on the touch screen?
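The octave arithmetic above can be checked in a couple of lines, assuming 7 in-scale notes per octave:

```python
# Quick check of the keyboard arithmetic: how many C Ionian notes remain
# after dropping the lowest and highest octaves of a 7-octave range.
NOTES_PER_OCTAVE = 7                    # C Ionian has 7 pitch classes

def playable_notes(total_octaves=7, drop_low=3, drop_high=2):
    kept = total_octaves - drop_low - drop_high
    return kept * NOTES_PER_OCTAVE

print(playable_notes())                 # 14 (2 octaves kept)
print(playable_notes(drop_high=1))      # 21 (3 octaves kept)
```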

Actually this is one of the crucial issues that determine the success of this application. It should be admitted that no final decision has been made on it so far. One simple approach we adopted at the beginning was to divide the screen evenly into a 4-row by 2-column grid, giving the user 8 notes to play at a time. The master holds several 8-note patterns ready to assign to the users, each pattern drawn from a certain scale such as Ionian or Dorian. The advantage of this layout is that every note has a relatively large touchable area, and a novice user's freedom of expression is constrained to 8 in-scale notes. But the disadvantages seem to outweigh the advantages. Users who want more notes cannot get them until the master assigns a new pattern. Ordinary users cannot tell the notes apart until they play and listen, and they soon get bored once they realize how unintuitive it is to control their own expression. There are two facets to these drawbacks: 1. expert users want more freedom, so they need a detailed control panel; 2. ordinary users want more control over their expression, so they need an intuitive control panel.
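The 4x2 grid itself is trivial to implement; a sketch of the touch-to-note mapping, with a made-up screen size and an example pattern:

```python
# Sketch of the 4-row x 2-column layout: map a touch point to one of
# the 8 notes the master assigned. Screen dimensions are illustrative.
ROWS, COLS = 4, 2
SCREEN_W, SCREEN_H = 320, 480           # points, iPhone-era screen

def touched_note(x, y, pattern):
    """pattern: 8 MIDI note numbers, read left-to-right, top-to-bottom."""
    col = min(int(x / (SCREEN_W / COLS)), COLS - 1)
    row = min(int(y / (SCREEN_H / ROWS)), ROWS - 1)
    return pattern[row * COLS + col]

c_ionian_8 = [60, 62, 64, 65, 67, 69, 71, 72]    # C4..C5
print(touched_note(10, 10, c_ionian_8))          # top-left cell -> 60 (C4)
print(touched_note(300, 470, c_ionian_8))        # bottom-right cell -> 72 (C5)
```

The large cells are exactly why every note gets a comfortable touch area, and exactly why only 8 notes fit.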

To cater to both kinds of users, the system design rule "leave it to the user" needs to be applied: two layouts should be designed, and users choose their preference. For expert users, a note-based layout should be provided. It contains 16-32 notes, laid out systematically in a way that is easy to start with yet hard to master. For novice users, a graphic-based, or drawing-based, layout is preferable. It maps the drawing to the rise and fall of the melody line, which corresponds to the feeling the user wants to express; the algorithm then decides which pitches to use according to a chord-scale combination provided by the master. If you are keen enough, you may notice an issue with the drawing-based interface: how can the user express rhythm?
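The drawing-to-pitch mapping could be as simple as quantizing the finger's vertical position to the nearest pitch in the master-provided set. A sketch under that assumption (the quantization rule is my guess, not a finished design):

```python
# Sketch of the drawing-based mapping: the finger's vertical position is
# quantized to a pitch in the chord-scale set supplied by the master.
def contour_to_pitch(y, screen_h, scale_pitches):
    """y = 0 is the top of the screen; a higher finger gives a higher pitch."""
    frac = 1.0 - y / screen_h                    # invert: top = high pitch
    idx = round(frac * (len(scale_pitches) - 1))
    return scale_pitches[idx]

c_ionian = [60, 62, 64, 65, 67, 69, 71, 72]      # C4..C5
print(contour_to_pitch(0, 480, c_ionian))        # top of screen -> 72
print(contour_to_pitch(480, 480, c_ionian))      # bottom of screen -> 60
```

The user only ever controls the contour; the set `scale_pitches` changes whenever the master reassigns the chord-scale, so the same drawing stays in harmony with the background.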

One approach is to use finger motion to signal the note-on of the new note and the note-off of the old one, where the new note is mapped to the finger's new position. For instruments such as guitar and piano we usually don't even need to signal the note-off explicitly; it can be triggered automatically once the note finishes decaying. This seems workable, though we haven't implemented it yet. Another approach is to use one finger to tap the rhythm while another finger draws the melody. This is also interesting, and quite convenient indeed.

So much for the interface issues. What about the overall performance produced by this system? How do we make the performances of an ad-hoc group of scattered experts and novices make sense, fit together, or at least sound good? This problem is two-fold. The easier and more fundamental half is making the mixture sound good; the harder half is making it make sense.

So how? Tackling the "sound good" problem is relatively easy. It demands something we might call algorithmic mixing and mastering. Theoretically (I do not have a source for this statement), the sum of any number of channels of any sound can be made comfortable to the human ear as long as it is well mixed and mastered, regardless of the underlying musical structure (such as the chord progression). In other words, we can always manage to make it sound comfortable. The problem is how. Since this has not been implemented yet, nothing definitive can be said, but let's make some guesses. A very simple algorithm would set the volume of every channel to 1/n, where n is the number of channels. This makes sense, but it is not ideal: you might argue that some channels deserve a higher weight. So here comes the question: how do we decide which channel gets a higher weight? One approach is, again, to leave it to the users, but since we assume most users are musical novices, that may not work as hoped. Note that our scenario is public-space jamming, which is very different from an on-stage live performance. An on-stage performance carries a sense of being focused on, while our scenario carries a sense of dispersal, in which everyone enjoys being hidden rather than being watched. So "weight" no longer conveys the same meaning. Interestingly, people joining this jam will probably still wish their performance to be heard by everyone else. Combining these two observations, we can safely derive a bottom line for the auto mixing and mastering mechanism: every player should at least be able to hear their own contribution from time to time. With this key finding, we can write an algorithm involving some randomness to implement the function. Not a big deal yet.
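One guess at such an algorithm, combining the 1/n baseline with the randomness just mentioned: start from equal weights, then let a randomly chosen player be briefly emphasized each bar, so everyone hears their own part from time to time. The boost factor and per-bar rotation are assumptions for illustration.

```python
# Guessed auto-mixing rule: equal 1/n gains, plus a rotating random
# "spotlight" boost so every player is occasionally audible to themselves.
import random

def mix_weights(n_channels, spotlight=None, boost=3.0):
    """Return per-channel gains that always sum to 1.0."""
    raw = [1.0] * n_channels
    if spotlight is not None:
        raw[spotlight] *= boost          # emphasize one player this bar
    total = sum(raw)
    return [w / total for w in raw]

bar_spotlight = random.randrange(4)      # pick one of 4 players this bar
gains = mix_weights(4, spotlight=bar_spotlight)
print(abs(sum(gains) - 1.0) < 1e-9)      # True: normalization preserved
```

Normalizing after the boost keeps the master output level stable no matter who is in the spotlight, which is the "mastering" half of the bargain.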

The big deal is making the outcome make sense. Making sense requires a logical expression of music: it must comfort not only our ears but our minds. Since the basic assumption is that the participants are mostly musical novices, this becomes an algorithmic composition problem. It is a classical algorithmic composition problem when the musical content involved is the pitches and timbres of standard instruments, and a new one when more sound-synthesis elements are involved. So again it divides into two sub-problems. To tackle the classical one, we would need a fairly strict algorithmic composition approach, which involves a lot of AI machinery and is far from fully developed; the relevant techniques can be found in plenty of literature. The "modern" algorithmic composition problem, which aims at modern musical styles such as electroacoustic music, is relatively easier as I see it, because the aesthetics there are much more subjective than those governed by classical music theory. As for implementing the algorithm, there are several options. One is to implement it on the user's side, so that each user jams within his or her own algorithmic logic. The obvious disadvantage is that the users' "logics" may collide with each other, making the combined jam unpleasant. True! So there is a second approach: implement the algorithm on the master's side, so the master can coordinate all the users to create a good piece of music. Implemented this way, the master is effectively an algorithmic composer and conductor at once. A third approach is for both master and users to run algorithms, with a feedback channel from the master to each user telling the user's algorithm to adjust itself to the whole performance.
I think the third approach is the optimal one. Discussing this topic further is beyond my ability for the moment, so I'd better stop here.

And there are still other issues. Take the audio engine: for now this project uses AUPreset + AudioUnit + AVSession to make sound, with the AUPreset file pointing to instrument samples prerecorded by Apple GarageBand. I don't know whether this is legal or not (though according to the official statement it seems to be). A more interesting and challenging approach would be to use mobile STK, mobile Csound, or ChucK for sound synthesis. Hopefully with one of these engines the size of the app can be greatly reduced, and the app gets filled with some of the most advanced stuff as well. Another issue is whether to use OSC instead of MIDI as the transport for musical performance data, since OSC seems to have lower network transport latency. We'll see. A third issue, and a big one, is evaluation, which I'll use the next two paragraphs to discuss.
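To make the OSC-versus-MIDI trade-off concrete, here is what a note message might look like on the wire. The address "/wijam/note" is a made-up name; the byte layout follows the OSC 1.0 specification (null-padded strings to 4-byte boundaries, then big-endian 32-bit integer arguments).

```python
# Sketch of a note-on message encoded as a raw OSC packet.
import struct

def osc_pad(b):
    """Null-terminate and pad a byte string to a multiple of 4 bytes."""
    b += b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_note(pitch, velocity):
    msg = osc_pad(b"/wijam/note")        # address pattern (hypothetical)
    msg += osc_pad(b",ii")               # type tags: two int32 arguments
    msg += struct.pack(">ii", pitch, velocity)
    return msg

packet = osc_note(60, 100)
print(len(packet))                       # 24 bytes, 4-byte aligned throughout
```

Such a packet would typically go out over UDP, which is where the latency advantage over a serial MIDI stream would come from; that remains to be measured.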

This kind of work, if ever submitted to academic journals or conferences, will definitely confront the problem of evaluation. In other research areas, such as computer architecture, evaluation is quite straightforward: there are indices indicating whether an architecture (or an architectural improvement) is good, the most widely used being "performance", i.e. how fast a system runs. In the field of computer music, evaluation becomes much more ambiguous. How do we determine whether a computer music system is good or bad? To narrow the discussion: in what sense can we say that a networked collaborative system is good? One approach found in many papers is to "let the audience judge". These papers present audience "feedback": subjective impressions, suggestions, sometimes questionnaires. Similarly, in one paper I saw, the evaluation method was to post the output of the computer music system on the web and let viewers rate it. There is also the "let the participants judge" method. The logic behind both evaluation methods is natural and obvious: since music is an aesthetic process, the final judgement should be made by human beings.

But there is yet another approach, which might be called "machine appreciation". It would be built on top of "machine listening" in a very high-level sense. Ordinary machine listening cares only about the audio material and the structure behind it, while machine appreciation demands a higher level of listening that cares about the musical material and the musical structure. Of course, everything can be scaled down to a simplest case. For machine appreciation in the context of classical musical structure, the simplest algorithm only needs to check whether the notes are within the scale, or whether the voice leading obeys the rules. But as you will agree, this is far from enough. Whether a machine is able to appreciate music at all is itself a big, big question.
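That simplest case is easy to state as code. A minimal sketch of the in-scale check (the voice-leading check would be considerably more involved):

```python
# Simplest "machine appreciation": are all played pitches within the
# scale the master assigned?
def in_scale(pitches, key, scale_steps):
    allowed = {(key + s) % 12 for s in scale_steps}
    return all(p % 12 in allowed for p in pitches)

IONIAN = [0, 2, 4, 5, 7, 9, 11]
print(in_scale([60, 64, 67], 0, IONIAN))   # True: C, E, G are in C Ionian
print(in_scale([60, 61], 0, IONIAN))       # False: C# is out of scale
```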

Nevertheless, we can use machines for measurement: for example, whether doing such-and-such makes the users more active, or whether such-and-such makes a collaborative system more responsive. These are things machines can absolutely do.

So much for evaluation. I guess I've now given an introduction to this collaborative music jamming application for small public spaces: the big scenario, the roles of master and users, the user interface issues, the output quality issue, and the evaluation issue. With all of these, I believe an extremely good public-space jamming application can be created. Hopefully it can be done soon!

If you are a "master" looking for free jam-along or backing tracks, here are some great ones:

to be continued...




