[ PRODUCTS ]

 

MIDI.DIGITSTER.COM

 


AU LABORATORY

 

Our developments are always in touch with the future.

Here we will post interesting projects of a conceptual nature or developments that are still in an early state. Some of these apps may see the light of day at some point; others may serve merely for internal research or for delivering components to be used in later product releases.

 




 

Sampler Series

We have created prototypes for our new SAMPLER SERIES


 



The MINISAMPLER allows creating sampler presets from selectable audio files, based on easy-to-use templates.

Our upcoming SAMPLER SERIES audio units are designed with simplicity and performance in mind. Currently, iOS lacks a solid toolset for basic sampling and for creating sampling libraries, especially in AudioUnit environments. Our products will fill that gap quite a bit.

MINISAMPLER and AUTOSAMPLER are based on Apple's AUSampler file format, which can essentially be shared with the Mac platform (Logic Pro X, MainStage and GarageBand). A template-based sound design keeps things simple and enables even users with little experience to compile sampler-based presets on the fly and use them for music production and live performances.
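
As a small illustration of this compatibility (our own hedged example, not product code), an exported AUSampler preset can be loaded with Apple's AVAudioUnitSampler in Swift; the preset name "MyTemplatePreset" is just a placeholder for a file created with one of our samplers:

    import AVFoundation

    let engine = AVAudioEngine()
    let sampler = AVAudioUnitSampler()
    engine.attach(sampler)
    engine.connect(sampler, to: engine.mainMixerNode, format: nil)

    do {
        // "MyTemplatePreset.aupreset" is a hypothetical preset exported from the sampler.
        if let presetURL = Bundle.main.url(forResource: "MyTemplatePreset", withExtension: "aupreset") {
            try sampler.loadInstrument(at: presetURL)   // loads the AUSampler preset (.aupreset)
        }
        try engine.start()
        sampler.startNote(60, withVelocity: 100, onChannel: 0)   // audition middle C
    } catch {
        print("Could not load preset or start engine: \(error)")
    }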

The preview manuals for JAX MINISAMPLER and JAX AUTOSAMPLER can be found here too. 

 



The AUTOSAMPLER allows sampling from tone generators inside supported AudioUnit environments.



The Audio Recorder can record 8 audio streams and play them back via MIDI note messages.

 

New UI Design

We are currently experimenting with new UI designs


 



BERLIN SERIES Charlottenburg is a massive polyphonic, analog-modelled hybrid bolide.

 

While our current vector-based graphic design is well thought out, losslessly scalable and very resource efficient, we see that users often find it somewhat limited. Users also often stick to the common static user interface paradigms they know from desktop computers. So we have lately experimented with 3D modelling and picture-based UI design, and here are some interesting results.

For our upcoming synthesizer line, the BERLIN SERIES, we created some UI prototypes. The user interface creation is based on 3D modelling combined with vector images. We will not use real 3D environments for the releases, but rather render everything to images and design it the traditional way, meaning we use so-called 'movie images' for the moving 3D controls and display further elements as vector graphics overlays.

Creating such a user interface design is extremely 'geeky', time-consuming and difficult, and requires a lot of experience. In fact, we used four different modelling and specialized graphics applications to compile the required results, which must fit certain coding requirements. Very often we had to revert the results of one design stage because they did not work, for a number of different reasons.

Theoretically it is possible to build Audio Units with real 3D environments, as Apple even provides the foundations for such apps, but we think that audio performance and DSP should have absolute priority, and real 3D for some simple knob and button movements is not really necessary.

 



BERLIN SERIES Tempelhof is a sampler-based drum synthesizer/groovebox


BERLIN SERIES Moabit is a virtual analog bass synthesizer

 

 

JAX Variable Latency Delay

Slow down or speed up your audio in realtime with a time modification audio unit


 

 

The JAX Variable Latency Delay is a special realtime effect based on an audio research project with the goal of modifying playback speed within a sample-rate-constant stream, with or without optional pitch compensation. (It has always been a dream of ours to create this kind of time machine.)

Streaming audio is usually a continuous process with a fixed sample rate, which makes it quite difficult to change the playback speed without running into problems and discontinuities. Slowing down or speeding up the stream would result in inconsistencies in the audio flow, breaking all the rules. Time itself cannot be modified in nature.

So we sat down, switched our brains on to find a way to somehow overcome this fixed paradigm, and came up with a musically useful technique in realtime audio processing. (Note: we are not talking about any kind of offline processing here, where this can be achieved rather easily.)

We wanted to use Apple's Audio Unit API to realise this, and everyone knows that an audio unit cannot change the sample rate dynamically, nor is it even possible to request a sample rate change from the host, or for the host itself to change the sample rate on the fly (dynamically) on any Apple device.

(Apple's realtime audio system very much suffers from being fixed to a specific, driver-dependent sample rate. This is an issue by design, and it causes a lot of the current fundamental problems on all the latest Apple devices, including many existing audio units that actually process audio at the wrong sample rate. Mostly this is because many host and plugin developers simply do not understand how Apple's audio system is supposed to work, which amplifies these problems further. It seems that Apple itself sometimes no longer understands this audio system. We think this is a conceptual issue.)

However, our approach is a delay-based mechanism, which of course has some limitations too, but is actually able to slow down and speed up continuous audio streams in realtime, to a certain extent and with certain rules applied to the process.

The central mechanism is a delay (latency) buffer of definable length, with a variable-speed circular reading pointer and a constant-speed writing pointer into that buffer. The size of the delay buffer can be adjusted (non-realtime) to the tempo of the host, so the resulting effect becomes controllable.

The writing pointer into this audio buffer always moves at constant speed, as this is required to keep the continuity of the constant audio stream. But the reading pointer can, for instance, become slower, giving the impression of a temporary speed reduction over a certain timeframe.

If the reading pointer is anywhere behind the writing pointer, it can be sped up again until it reaches the position of the writing pointer. The reading pointer must never exceed the writing pointer, of course, and it should also never become too slow, get stuck, or run into any other overrun, which would destroy the audio flow.

The principle boils down to the fact that the audio must first be slowed down before it can be sped up again. The size of the buffer therefore very much affects the sonic result (time dependency) of this effect. It should be set to musically useful sizes, probably 4 or 8 musical bars, depending on the host's tempo and division and on the desired audible effect length.
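
To make the principle more concrete, here is a small, simplified Swift sketch of the idea (illustration only, with made-up names, not our shipping DSP code): a circular buffer sized to a number of bars at the host tempo, a constant-speed write pointer, and a fractional, variable-speed read pointer that is clamped so it never overtakes the write pointer.

    import Foundation

    // Simplified sketch of the variable-latency principle (not the actual JAX DSP code).
    struct VariableLatencySketch {
        private var buffer: [Float]
        private var written: Int = 0          // absolute number of samples written so far
        private var readHead: Double = 0.0    // absolute, fractional read position
        var readSpeed: Double = 1.0           // 1.0 = realtime, < 1.0 slows down, > 1.0 catches up

        // Buffer sized to a musical length, e.g. 4 or 8 bars at the host tempo.
        init(bars: Double, beatsPerBar: Double, tempoBPM: Double, sampleRate: Double) {
            let seconds = bars * beatsPerBar * 60.0 / tempoBPM
            buffer = [Float](repeating: 0, count: max(1, Int(seconds * sampleRate)))
        }

        mutating func process(_ input: Float) -> Float {
            // Writing always happens at constant speed; this keeps the incoming stream continuous.
            buffer[written % buffer.count] = input
            written += 1

            // Reading advances at a variable speed, but must never overtake the write pointer
            // and must never fall further behind than the buffer length.
            readHead = min(readHead + readSpeed, Double(written - 1))
            readHead = max(readHead, Double(written - buffer.count))

            // Linear interpolation between the two neighbouring samples.
            let base = Int(readHead)
            let frac = Float(readHead - Double(base))
            let a = buffer[base % buffer.count]
            let b = buffer[(base + 1) % buffer.count]
            return a * (1 - frac) + b * frac
        }
    }

In a real implementation the read pointer would of course start some distance behind the write pointer (the actual latency), and the interpolation and speed changes would be smoothed much more carefully.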

The entire processing is designed in a way that allows dynamic speed changes rather than so-called 'halftime' speed effects with a fixed ratio. Side note: some continuous time modification effects such as choruses use the same principle, but with much smaller circular buffers and some low frequency oscillation components.

Slowing down and speeding up audio will additionally result in pitch changes. This is very natural. It can be compensated for with proportionally coupled pitch shifters of any kind, which virtually correct the "wrong" pitch when applied in the inverse direction.

This way, a constant pitch can be emulated. The sonic effect is then the impression of a tempo change without audible pitching effects, a result that is impossible in nature, because it would actually require modifying TIME directly, which to this day is simply impossible in the real world.
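
The required compensation follows directly from the read speed. As a hedged, back-of-the-envelope formulation (ours, not necessarily the exact curve used in the product): a read speed of 0.5 detunes the audio by one octave downwards, so the shifter must transpose up by the inverse amount.

    import Foundation

    // Compensating pitch shift, in semitones, for a given read speed.
    // Example: a read speed of 0.5 (half tempo) sounds one octave too low,
    // so the pitch shifter must transpose up by +12 semitones.
    func pitchCompensationSemitones(forReadSpeed speed: Double) -> Double {
        return -12.0 * log2(speed)
    }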

However, it is easily possible in the digital domain and with other special artificial concepts, as used in movies for instance (e.g. slow motion). These methods all have something in common: they attempt to trick TIME by creating an illusion.

 


The JAX PANDA Pianist

Multi-Model Virtual Grand Piano MIDI Module


The JAX Panda Pianist, grown from a personal research project, is an economical multi-model concert grand piano collection (a special virtual sound module) and a virtuoso piano performer at the same time.

The AudioUnit delivers more than 16 different piano models*, for which we focused on the authentic emulation of wide-range velocity dynamics and realistic body and string resonances. The virtuoso is built in and will perform anytime you want him to.

The user may select from the different sound models, adapted from the world's most famous concert pianos, and has a lot of extra tonal control over each of these models. The (small) single models can be removed and re-downloaded on demand from the internet, so that users can manage their precious device space as needed.

The JAX Panda Pianist can be used as a full-featured, MIDI-controllable sound module for realtime performances inside any audio unit host application. The inbuilt MIDI player lets you select recorded performances by real pianists of various levels of expertise, even within the audio unit, or just play these performances randomly for listening and learning. The speed of the MIDI performance playback can be adjusted freely and is independent of the host tempo, because the delivered recordings have no mapped MIDI tempo information.

Our piano models are optimized but still sample-based. Instead of blindly sampling tons of static velocity layers from real grand pianos, we developed our high-performance “economic” piano modeling technology specifically with mobile devices in mind. A single piano model needs no more than 60 to 80 MB of expanded memory on the user's device, while giving much better expressive sonic results than many gigabyte-heavy, multi-velocity sampled pianos on the sampler market today. Disk streaming also has a lot of potential to disturb the sound and its continuity, and embedding such memory-eating monsters into a working setup is often a problem as well.

 

 

We carefully analyzed loads of available sampled grand pianos and found that most of them have quite audible problems, especially with velocity and dynamics. The sampled velocity layers are mostly much too coarse to give correct dynamic playing, or the finer nuances of playing, any real chance. Some sampling libraries simply compensate for the missing dynamics by scaling loudness, but the result is not satisfying. The statically sampled string resonances and the body tone of a sampled piano can also lead to audible problems in some circumstances, and purely sampled libraries offer no adjustable choices at all for optimizing the sonic result to one's needs. Very often the sampled string resonances and ambiences even sum up to unnatural sound bursts.

When we talk about dynamics, we do not mean just leveling. A piano sounds quite different when played softly than when played aggressively. The tone must be dynamically filtered and adjusted in attack and decay. Without an extra dynamic processor, most sampled pianos are hardly enjoyable.
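
To illustrate what "dynamically filtered and adjusted in attack" means in practice, here is a tiny hedged Swift sketch of a velocity-to-tone mapping (purely illustrative numbers and names, not the Panda Pianist engine):

    import Foundation

    // Illustrative only: soft notes become quieter, darker and slightly slower in attack,
    // hard notes become louder, brighter and snappier.
    struct PianoVoiceShape {
        let gain: Float           // linear amplitude
        let cutoffHz: Float       // low-pass cutoff shaping the tone
        let attackSeconds: Float  // amplitude envelope attack time
    }

    func voiceShape(forVelocity velocity: UInt8) -> PianoVoiceShape {
        let v = Float(velocity) / 127.0
        return PianoVoiceShape(
            gain: powf(v, 1.5),                        // more than plain level scaling
            cutoffHz: 800.0 + 12_000.0 * powf(v, 2.0), // hard strikes open the filter
            attackSeconds: 0.012 - 0.008 * v           // soft strikes speak a little slower
        )
    }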

The included virtuoso MIDI files prove our concept. Many of them are played with very fine dynamic nuances, which would simply sound strange or even awkward with many other (purely sampled) pianos on the market.

We also know that piano sound is very much a matter of taste, and that taste even changes from time to time, so with this unique release we hope to deliver a versatile sound module that can satisfy all demands and preferences.

Reverb is a piano's best friend, we know. But we did not include an extra digital reverb unit. If high quality reverberation is needed, we recommend using an external high quality reverb, such as our JAX Convolutor PRO. It can emulate all the rooms and spaces of this world, and much more.