In audio and video editing, most people focus on the video side, eager to learn skills like FFmpeg and OpenGL. In fact, audio development matters just as much; one could even argue it is a bit harder than video development.

Common audio operations such as speed change, pitch shifting, and mixing are usually handled by open-source libraries, the two most popular being libsonic and SoundTouch.

This article will briefly introduce the use of libsonic.

Getting libsonic

Libsonic is an open-source library for changing audio playback speed, and it works well even at speeds above 2x. It is also used in the Android platform sources:

Android.googlesource.com/platform/ex…

You can clone the libsonic repository with:

git clone git://github.com/waywardgeek/sonic.git

The repository is quite complete: it contains both Java and C implementations along with usage demos, which makes porting very convenient.

If all you need is speed and volume change, use the sonic_lite.h and sonic_lite.c files, which are a trimmed-down version of the library.

The libsonic home page also provides a repository for the Android NDK version, which can be downloaded from this link:

git clone git://github.com/waywardgeek/sonic-ndk.git

However, that version is quite old and still integrates via Android.mk, whereas most projects have since moved to CMake.

If you are interested, you could wrap libsonic in a CMake-based NDK library yourself; it might even earn you a star on GitHub.
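As a rough sketch of what such a port could look like (the file layout and target name here are my own assumptions, not from the sonic repository), a minimal CMakeLists.txt for an NDK build might be:

```cmake
cmake_minimum_required(VERSION 3.10)
project(sonic)

# Build the sonic sources cloned from the repository as a shared library.
# This assumes sonic.c and sonic.h sit next to this CMakeLists.txt.
add_library(sonic SHARED sonic.c)

# Let dependents find sonic.h.
target_include_directories(sonic PUBLIC ${CMAKE_CURRENT_SOURCE_DIR})

# Link the Android log library, commonly needed in NDK builds.
find_library(log-lib log)
target_link_libraries(sonic ${log-lib})
```

From there, the Gradle `externalNativeBuild` block would point at this file as usual for CMake-based NDK projects.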

Using libsonic

Libsonic does not expose many calling interfaces; the demo code in the repository covers them well.

The following code changes the playback speed of an audio file:


/* Run sonic_lite. */
static void runSonic(char* inFileName, char* outFileName, float speed, float volume) {
  waveFile inFile = NULL, outFile = NULL;
  /* Input and output sample buffers. */
  short inBuffer[SONIC_INPUT_SAMPLES], outBuffer[SONIC_INPUT_SAMPLES];
  int samplesRead, samplesWritten, sampleRate, numChannels;

  /* Open the input file and read its sample rate and channel count. */
  inFile = openInputWaveFile(inFileName, &sampleRate, &numChannels);
  if (inFile == NULL) {
    fprintf(stderr, "Unable to read wave file %s\n", inFileName);
    exit(1);
  }
  if (numChannels != 1) {
    fprintf(stderr, "sonic_lite only processes mono wave files. This file has %d channels.\n",
        numChannels);
    exit(1);
  }
  if (sampleRate != SONIC_SAMPLE_RATE) {
    fprintf(stderr,
        "sonic_lite only processes wave files with a sample rate of %d Hz. This file uses %d\n",
        SONIC_SAMPLE_RATE, sampleRate);
    exit(1);
  }

  /* Open the output file. */
  outFile = openOutputWaveFile(outFileName, sampleRate, 1);
  if (outFile == NULL) {
    closeWaveFile(inFile);
    fprintf(stderr, "Unable to open wave file %s for writing\n", outFileName);
    exit(1);
  }

  /* Initialize sonic and set speed and volume. */
  sonicInit();
  sonicSetSpeed(speed);
  sonicSetVolume(volume);

  do {
    /* Read samples from the input file. */
    samplesRead = readFromWaveFile(inFile, inBuffer, SONIC_INPUT_SAMPLES);
    if (samplesRead == 0) {
      /* End of input: flush whatever sonic still has buffered. */
      sonicFlushStream();
    } else {
      /* Hand the samples to sonic's internal stream, which
         applies the speed and volume settings. */
      sonicWriteShortToStream(inBuffer, samplesRead);
    }
    /* Drain processed samples from sonic into outBuffer and
       write them to the output file. */
    do {
      samplesWritten = sonicReadShortFromStream(outBuffer, SONIC_INPUT_SAMPLES);
      if (samplesWritten > 0) {
        writeToWaveFile(outFile, outBuffer, samplesWritten);
      }
    } while (samplesWritten > 0);
  } while (samplesRead > 0);

  closeWaveFile(inFile);
  closeWaveFile(outFile);
}


This code example is fairly straightforward.

The first step is to check the input and output files for problems, which is very basic but very important.

The sample files used in the code are in WAV format; the libsonic repository ships test files, so you don't have to search for them yourself.

WAV files can be read and written with the wave.c and wave.h files in the repository.

The next step is to read the sample from the input file, pass it to libsonic, and finally write the processed sample to the output file.

One concept is worth noting here: in video processing we talk about frames, but audio has no frames. Everything is measured in sample points, for example how many samples to process at a time.

In the code example, the readFromWaveFile method reads the sample from a file, and the sonicWriteShortToStream method hands the sample to libsonic.

Libsonic keeps a sonicStream structure that buffers the transformed data, processed according to the speed and volume settings.

Finally, sonicReadShortFromStream reads the data from libsonic and writes it to the output file using the writeToWaveFile method.

In testing, the method above achieves volume adjustment and speed change while preserving the original pitch.

Summary

In addition to WAV files, raw PCM data can also be speed-changed.
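The full library (sonic.h, as opposed to sonic_lite) exposes a stream-based API that can be fed raw PCM shorts directly, with no WAV container involved. A minimal sketch, assuming 16-bit mono PCM and that the caller provides a large enough output buffer (the helper function itself is my own, not from the repository):

```c
#include <sonic.h>  /* full libsonic API from the cloned repository */

/* Speed-change a buffer of 16-bit mono PCM samples.
 * Returns the number of output samples written to outBuffer. */
int speedChangePcm(short* inBuffer, int numSamples,
                   short* outBuffer, int maxOutSamples,
                   int sampleRate, float speed) {
  /* Create a stream for mono audio at the given sample rate. */
  sonicStream stream = sonicCreateStream(sampleRate, 1);
  int written = 0, n;

  sonicSetSpeed(stream, speed);  /* e.g. 2.0f for double speed */

  /* Feed the raw PCM in, then flush since there is no more input. */
  sonicWriteShortToStream(stream, inBuffer, numSamples);
  sonicFlushStream(stream);

  /* Drain all processed samples out of the stream. */
  while ((n = sonicReadShortFromStream(stream, outBuffer + written,
                                       maxOutSamples - written)) > 0) {
    written += n;
  }
  sonicDestroyStream(stream);
  return written;
}
```

Unlike sonic_lite, which uses global state (sonicInit and friends), the full API carries all state in the sonicStream handle, so multiple streams can be processed independently.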

Follow the public account "Audio and Video Development Advanced"; source-code analysis and more content will follow. Stay tuned!