This article introduces the main usage of sharp, a popular image processing package for Node.js. I came across this package while developing screeps-world-printer, and as I dug deeper I ran into quite a few pitfalls and points that deserve attention, which this article will point out.

Here are some common links:

  • sharp – npm
  • Sharp – website
  • Sharp GitHub issue area: if you want to use the library in depth, you will definitely end up here a lot

Installation

There is nothing special to say about installation; just run one of the following commands:

npm install sharp
# or
yarn add sharp

Note that sharp is a libvips wrapper and will download a prebuilt libvips binary during installation, so it is best to configure a mirror source in advance.
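
If the default download host is slow for you, one option is to point the downloads at a mirror via npm config. This is only a minimal sketch: the sharp_binary_host / sharp_libvips_binary_host config keys and the mirror URLs below are assumptions that depend on your sharp version, so check the sharp installation docs before relying on them.

# assumed config keys and mirror URLs; verify against your sharp version's install docs
npm config set sharp_binary_host "https://npmmirror.com/mirrors/sharp"
npm config set sharp_libvips_binary_host "https://npmmirror.com/mirrors/sharp-libvips"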

Create an image

Sharp can create images in three common ways: generating one directly, reading from a file, and reading from a Buffer. Let's look at them one by one:

Create directly: this generates a brand-new image. Pass sharp an object containing a create object, which holds the configuration of the new image. The background property at the end can also be a more detailed RGBA object, as shown here.

const sharp = require('sharp');

const image = sharp({
    create: {
        // width
        width: 300,
        // height
        height: 200,
        // number of channels
        channels: 4,
        // background color
        background: 'black' // or '#000000', or a more detailed RGBA configuration object
    }
});

Reading from a file: this is simpler; provide a path and Sharp will go and look for the image there.

const image = sharp('./origin.png');

Read from Buffer: Pass in a Buffer that Sharp instantiates. This is the most common way to create images, for reasons discussed below.

// Sharp can parse SVG, which is useful
const buffer = Buffer.from(
    '<svg xmlns="http://www.w3.org/2000/svg" width="150" height="150"></svg>'
)
const image = sharp(buffer)

This method is also suitable for reading images from web requests:

// Use axios to request a network image
const axios = require('axios');

axios.get('http://xxxxx/xx/origin.png', {
    responseType: 'arraybuffer'
}).then(resp => {
    const image = sharp(resp.data)
})

Save the image

There are two ways to save images. The first is to save directly to a file:

const buffer = Buffer.from(
    '<svg xmlns="http://www.w3.org/2000/svg" width="150" height="150"></svg>'
)

sharp(buffer)
    .png()
    .toFile('./result.png')
    .then(() => console.log('Save done'))

The second option is to convert it to a buffer and do whatever you want with it:

sharp('./origin.png')
    .png()
    .toBuffer()
    .then(buffer => {
        console.log('buffer', buffer)
    })

Two things to note here:

  • toBuffer and toFile are both asynchronous methods.
  • If the instantiated image has no format, call the corresponding format method to set its file type before saving.

For example, if this error pops up when you save:

Error: Unsupported output format ./result.png

This means the image you want to save has no output format. You need to call .png(), .jpeg(), etc. first. See here.
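
For example, here is a minimal sketch that saves a newly created (and therefore format-less) image by setting the format explicitly before writing it:

const sharp = require('sharp');

// a created image has no format yet, so set one (here PNG) before saving
sharp({
    create: { width: 300, height: 200, channels: 4, background: 'black' }
})
    .png()
    .toFile('./result.png')
    .then(() => console.log('Save done'));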

Modify the image

Sharp supports image modification operations including resize and a host of other processing methods. There is not much to say here; anyone who has done image processing will recognise what each method does from its name. Note that these methods are synchronous and support chaining, so you can write:

const sharp = require('sharp');

const buffer = Buffer.from(`
    <svg xmlns="http://www.w3.org/2000/svg" width="150" height="150">
    </svg>
`)

sharp(buffer)
    .flip()
    .rotate(45)
    .resize(500)
    .toFile('./result.png')
    .then(() => console.log('Save done'));

Stitching images

The composite method is used for stitching.

This method takes an array; each element describes one image to composite onto the base. The input property is the image itself, blend is the blending mode, left is the overlay's distance in pixels from the left edge of the base image, and top is its distance in pixels from the top edge:

const sharp = require('sharp');

const run = async function () {
    // black rectangular base of 100 * 300
    const background = await sharp({ create: {
        height: 100, width: 300, channels: 4, background: 'black'
    }}).png();

    // 100 * 100 blue tile
    const blueTile = await sharp({ create: {
        height: 100, width: 100, channels: 4, background: 'blue'
    }}).png().toBuffer();

    background
        .composite([{ input: blueTile, blend: 'atop', left: 0, top: 0 }])
        .toFile('./testResult.png');
}

run();

It then generates:

atop, used in the example above, is probably the most common blend value; it simply stacks one image on top of another. The other supported values are listed here, and detailed descriptions can be found here.

There is also gravity, an alternative to the top and left offsets for positioning the overlay; its accepted values can be found in sharp's TypeScript declarations.
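
For instance, here is a small sketch (reusing the background and blueTile from the example above; the output file name is just an example) that anchors the tile with gravity instead of explicit offsets:

// position the overlay with gravity instead of left/top
background
    .composite([{ input: blueTile, blend: 'atop', gravity: 'southeast' }])
    .toFile('./gravityResult.png');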


Note: the composite method cannot accept a Sharp instance as its input property! In the example above, I used Sharp to create the tile to be composited, but I had to call toBuffer to turn it into a Buffer before passing it to composite. Here is the relevant part of the documentation:

As you can see, you can pass in a Buffer or a path string, or describe a new image with create, but you cannot pass a Sharp instance.


composite has another big problem: it does not support chained calls! Suppose we chain it and try to cover the background with three blue tiles:

background
    .composite([{ input: blueTile, blend: 'atop', left: 0, top: 0 }])
    .composite([{ input: blueTile, blend: 'atop', left: 100, top: 0 }]) // <= offset to the right
    .composite([{ input: blueTile, blend: 'atop', left: 200, top: 0 }])
    .toFile('./testResult.png');

The expectation is that the result should be entirely blue, but it actually looks like this:

Only the last composite takes effect. There are two ways to solve this problem:

The first option is to put all the tiles into the single array passed to composite instead of chaining:

background
    .composite([
        { input: blueTile, blend: 'atop', left: 0, top: 0 },
        { input: blueTile, blend: 'atop', left: 100, top: 0 },
        { input: blueTile, blend: 'atop', left: 200, top: 0 }
    ])
    .toFile('./testResult.png');

The biggest problem with this is that all of the images to be composited must be loaded into memory at the same time, which may not be friendly to a machine with little memory when stitching gigapixel images. And the compositing work is synchronous, so be careful not to let it stall the process.

The second option is to call toBuffer and re-instantiate after each composite. You can find an example here. Honestly, it still feels awkward to write; the issue area has some discussion about why composite does not support chaining, but nothing has changed so far (2021-09-30).

If you need to stitch a very large image, my advice is to use the first method: composite a small number of rows concurrently and save each finished row locally; once all the rows are done, pass their saved paths to the big base image and composite the rows together.
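
Here is a rough sketch of that idea, under assumptions of my own: rows is a two-dimensional array of square tile Buffers with side length tileSize, and the function name, file names, and background colour are all hypothetical.

const sharp = require('sharp');

// hypothetical sketch: stitch each row to a temporary file first,
// then composite the saved row files onto the final base image
async function stitchRows(rows, tileSize, outputPath) {
    const rowFiles = [];

    for (let y = 0; y < rows.length; y++) {
        const rowFile = `./row-${y}.png`;
        await sharp({ create: {
            width: tileSize * rows[y].length, height: tileSize,
            channels: 4, background: 'black'
        }})
            .composite(rows[y].map((tile, x) => (
                { input: tile, blend: 'atop', left: x * tileSize, top: 0 }
            )))
            .png()
            .toFile(rowFile);
        rowFiles.push(rowFile);
    }

    // the final composite only receives file paths, so the row images
    // do not all have to be held in memory as Buffers at once
    await sharp({ create: {
        width: tileSize * rows[0].length, height: tileSize * rows.length,
        channels: 4, background: 'black'
    }})
        .composite(rowFiles.map((file, y) => (
            { input: file, blend: 'atop', left: 0, top: y * tileSize }
        )))
        .png()
        .toFile(outputPath);
}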


Not only is chaining composite with composite a problem; chaining composite with other operations is also a problem. In the example below, you would expect the tile to be composited and the whole image then scaled and flipped, but the result suffers from the same problem as chained composite calls.

For example, composite, resize, and then flip left-to-right:

background
    .composite([{ input: blueTile, blend: 'atop', left: 0, top: 0 }])
    .resize(500)
    .flop(true) // <= flip left-to-right
    .toFile('./testResult.png');

Then you will find that, WTF, the left-right flip did not take effect:

In this case, I recommend the second solution: as shown below, convert the result to a Buffer (or a "snapshot", in the dev's words) after each operation:

let buffer = await background
    .composite([{ input: blueTile, blend: 'atop', left: 0, top: 0 }])
    .toBuffer();

buffer = await sharp(buffer)
    .composite([{ input: blueTile, blend: 'atop', left: 0, top: 0 }])
    .resize(500)
    .toBuffer();

sharp(buffer)
    .flop(true)
    .toFile('./testResult.png');

The result was exactly what we expected:

Modify transparency

There is often a need to adjust the overall opacity of an image, but this is where sharp falls short, as we can see from the channel operations in the API documentation:

removeAlpha removes the alpha channel outright, and ensureAlpha adds an alpha channel only if one does not exist. There is no API for directly adjusting opacity.

Here is the workaround, from the official issue area; it took a long time to find and is quite clever:

import { Sharp } from 'sharp';

/**
 * Add opacity to an image.
 *
 * Since sharp.ensureAlpha only adds an alpha channel to images that have none,
 * it cannot be used to adjust the opacity of an image that already has one.
 * @see https://github.com/lovell/sharp/issues/618#issuecomment-532293211
 *
 * @param sharp the image to modify
 * @param opacity 0 - 255, where 255 is completely opaque
 * @returns the image with opacity applied
 */
export const addOpacity = function (sharp: Sharp, opacity = 128): Sharp {
    return sharp.composite([{
        input: Buffer.from([255, 255, 255, opacity]),
        raw: { width: 1, height: 1, channels: 4 },
        tile: true,
        blend: 'dest-in'
    }]);
};
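
A usage sketch of the helper above (the file names are just examples; it assumes addOpacity from the snippet is available in scope): make an existing PNG roughly half transparent.

const sharp = require('sharp');

// assumes addOpacity from the snippet above is in scope
addOpacity(sharp('./origin.png'), 128)
    .png()
    .toFile('./halfTransparent.png')
    .then(() => console.log('Save done'));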

A puzzling npm ERR problem

While writing code I also ran into the following error:

npm ERR! code ELIFECYCLE
npm ERR! errno 3221225725
npm ERR! [email protected] case: `node test.js`
npm ERR! Exit status 3221225725
npm ERR!
npm ERR! Failed at the [email protected] case script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.
npm ERR!
npm ERR! A complete log of this run can be found in:
npm ERR!     C:\Users\a\AppData\Roaming\npm-cache\_logs\2021-08-30T12_29_35_503Z-debug.log

This error occurred when using npm run on a Windows machine. For details, see here.

Modify the pixel limit

When creating a Sharp instance or calling composite, you may run into the following problem:

Error: Input image exceeds pixel limit

This means the image you want to read is so large that it exceeds the default pixel limit. You can raise the limit by specifying limitInputPixels, as follows:

// when instantiating
sharp('./bigImage.png', { limitInputPixels: 1000000000 }).toFile('result.png');

// when compositing
sharp('./origin.png')
    .composite([{ input: 'bigImage.png', limitInputPixels: 1000000000 }])
    .toFile('result.png')

Final words

That is pretty much it for this article. We ran into plenty of problems, but the goals were achieved in the end. The features I used were mainly resizing and compositing; I touched the image and colour manipulation features much less.