Let’s see what it looks like

Full project: github.com/imnull/txt-…

The data composition of a pixel map

RGBA

Each point in a pixel map is composed of four RGBA values:

  • R red
  • G green
  • B blue
  • A alpha (transparency channel)

The three RGB values each correspond to a colored light-emitting element on the display; by adjusting the brightness of these three elements, the pixel can show different colors.

Each value controls the luminance of its element and is in the range 0~255, so each color channel has 256 grayscale steps. The three RGB channels together can therefore produce 256^3 colors, which is often called 16M true color.
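The 256^3 figure can be checked directly; a trivial sketch:

```typescript
// Each of the three RGB channels has 256 levels, so the total
// number of representable colors is 256^3.
const levels = 256
const totalColors = Math.pow(levels, 3)
// 16,777,216 colors, i.e. roughly 16M "true color"
```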

The remaining channel, alpha, controls transparency and is used when multiple image layers are composited.

Canvas drawing

A simple example:

const canvas = document.createElement('canvas')
canvas.width = 2
canvas.height = 2
// CanvasRenderingContext2D
const ctx = canvas.getContext('2d')
ctx.fillStyle = 'rgba(10,20,30,0.5)'
ctx.fillRect(0, 1, 1, 1)

In this example, we create a 2×2 canvas and use ctx to draw a rectangle with the color rgba(10,20,30,0.5) at the point (0,1); since the rectangle is only 1×1, it actually paints a single pixel.

Get structured picture data

Using the CanvasRenderingContext2D instance (ctx above), we can obtain structured image data as an ImageData object:

const canvas = document.createElement('canvas')
canvas.width = 2
canvas.height = 2
// CanvasRenderingContext2D
const ctx = canvas.getContext('2d')
ctx.fillStyle = 'rgba(10,20,30,0.5)'
ctx.fillRect(0, 1, 1, 1)

const imageData = ctx.getImageData(0, 0, 2, 2)

At this point, the main structure of imageData is:

{
  "width": 2,
  "height": 2,
  "data": [
    0, 0, 0, 0,       // [0, 0]
    0, 0, 0, 0,       // [1, 0]
    10, 20, 30, 128,  // [0, 1]
    0, 0, 0, 0        // [1, 1]
  ]
}

The canvas created here is 2×2, 4 pixels in total. Each pixel contributes four values to the array, arranged left to right and top to bottom. In this order, the point (0,1) we just drew is the third group of data in the array; the rest was never drawn, so all those values remain at their initial value of 0.
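The mapping from coordinates to positions in data follows directly from this layout; for example, the drawn pixel (0,1) in the 2-wide image above:

```typescript
// Each pixel occupies 4 consecutive entries (R, G, B, A), and rows are
// laid out top to bottom, so pixel (x, y) starts at (y * width + x) * 4.
const width = 2
const x = 0, y = 1
const index = (y * width + x) * 4
// index is 8, so this pixel's values live in data[8..11]
```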

Note that the A value here is still represented in the range 0~255, while in CSS rgba() the a component is 0~1; after drawing completes, the data is converted automatically.
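For instance, the CSS alpha of 0.5 used in the fill color shows up as 128 in the data. A sketch of the conversion (the browser does this internally, and may round slightly differently because of premultiplied alpha):

```typescript
// CSS rgba() alpha (0~1) → ImageData alpha channel (0~255)
const cssAlpha = 0.5
const dataAlpha = Math.round(cssAlpha * 255)
// 127.5 rounds to 128, matching the third pixel group above
```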

Pixel traversal

Since data is a flat array, it can be cumbersome to work with directly, so let's write a few traversal helpers to make later pixel access easier.

Type definitions

/** RGBA color */
export type TColor = [number, number, number, number]

/** Pixel information type */
export type TPixelInfo = {
    /** Current pixel color value [R,G,B,A] */
    color: TColor
    /** Global index position */
    index: number
    /** x-coordinate */
    x: number
    /** y-coordinate */
    y: number
    /** Canvas width */
    width: number
    /** Canvas height */
    height: number
}

Coordinate conversion

/**
 * Create a pixel information object from coordinates
 * @param image
 * @param x
 * @param y
 * @returns
 */
export const createPixelInfo = (image: ImageData, x: number, y: number) => {
    const { width, height, data } = image
    const pixelIdx = y * width + x
    const index = pixelIdx * 4
    const color: TColor = [
        data[index + 0], data[index + 1],
        data[index + 2], data[index + 3]
    ]
    const info: TPixelInfo = {
        width,
        height,
        index,
        color,
        x,
        y
    }
    return info
}

Linear traversal

Plain pixel-by-pixel sequential access is not very practical, so let's just warm up with it first:

/**
 * Linear traversal
 * @param imageData
 * @param cb
 */
export const traverseImageData = (imageData: ImageData, cb: (color: TPixelInfo) => void) => {
    const { data, width } = imageData
    for (let i = 0; i < data.length; i += 4) {
        const idx = i / 4
        cb(createPixelInfo(imageData, idx % width, idx / width >> 0))
    }
}
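As a quick sanity check of the traversal order, here is a self-contained sketch that mirrors the two functions above but runs on a plain object standing in for a real ImageData, so it works outside the DOM:

```typescript
type TColor = [number, number, number, number]
type TImageLike = { width: number; height: number; data: Uint8ClampedArray }

const createPixelInfo = (image: TImageLike, x: number, y: number) => {
    const index = (y * image.width + x) * 4
    const color: TColor = [
        image.data[index], image.data[index + 1],
        image.data[index + 2], image.data[index + 3]
    ]
    return { color, index, x, y, width: image.width, height: image.height }
}

const traverseImageData = (
    imageData: TImageLike,
    cb: (p: ReturnType<typeof createPixelInfo>) => void
) => {
    const { data, width } = imageData
    for (let i = 0; i < data.length; i += 4) {
        const idx = i / 4
        cb(createPixelInfo(imageData, idx % width, idx / width >> 0))
    }
}

// The 2×2 example from earlier: only (0, 1) was ever drawn
const image: TImageLike = {
    width: 2,
    height: 2,
    data: new Uint8ClampedArray([0, 0, 0, 0, 0, 0, 0, 0, 10, 20, 30, 128, 0, 0, 0, 0]),
}

const visited: string[] = []
traverseImageData(image, ({ x, y, color }) => {
    if (color[3] > 0) visited.push(`${x},${y}`)
})
// visited ends up as ['0,1']
```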

Nonlinear traversal and sampling

Most image-processing filters involve sampling: blur filters, scaling, smoothing, and so on.

/**
 * Nonlinear sampling
 * @param imageData
 * @param cb callback
 * @param step step length
 * @param samplingOffset sampling pixel offset
 */
export const traverseSampling = (
    imageData: ImageData,
    cb: (color: TPixelInfo) => void,
    step: number = 1,
    samplingOffset: number = 0
) => {
    if (step < 1) {
        step = 1
    }
    const { data, width, height } = imageData
    const rows = height / step >> 0
    const columns = width / step >> 0
    for (let Y = 0; Y < rows; Y++) {
        for (let X = 0; X < columns; X++) {
            const info = createPixelInfo(imageData, X * step, Y * step)
            /** Sample offset */
            const sampOffset = step * samplingOffset >> 0
            /** Sampled pixel index */
            const sampPixelIdx = width * (info.y + sampOffset) + info.x + sampOffset
            /** Sampled array index */
            const sampIdx = sampPixelIdx * 4
            /** Re-assign the sample point color */
            info.color = [
                data[sampIdx],
                data[sampIdx + 1],
                data[sampIdx + 2],
                data[sampIdx + 3]
            ]
            cb(info)
        }
    }
}

Nonlinear traversal accesses pixels through the coordinate system rather than in raw array order, which matches human habits better.

The idea of sampling is to reduce precision by setting a step size: base points are taken at a fixed interval and passed to the callback. In this article we do not use interpolation or other techniques to improve accuracy.

During sampling, the step×step region extending down and to the right of each base point is skipped; this region is called the sampling area.

The last parameter, samplingOffset, shifts the sampled pixel diagonally from the base point into the sampling area, so the sampled color can be taken from anywhere in the area instead of the base point itself, as needed.

When samplingOffset is 0, the sampled color is the value at the base point, i.e. the top-left corner of the sampling area. To take the color at the center of the sampling area, set samplingOffset to 0.5.
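The shift applied inside traverseSampling is just `step * samplingOffset >> 0`; with a step of 10, the two common settings work out as:

```typescript
const step = 10
// offset 0.5 samples the centre of each 10×10 sampling area...
const centreShift = step * 0.5 >> 0 // 5 pixels down and to the right
// ...while offset 0 samples the base point itself (top-left corner)
const cornerShift = step * 0 >> 0   // no shift
```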

Coordinate block traversal

Iterates over pixels in a rectangular area of the image:

/**
 * Traverse a rectangular area
 * @param imageData structured image data
 * @param rect rectangle area `[x, y, width, height]`
 * @param cb
 */
export const traverseArea = (
    imageData: ImageData,
    rect: [number, number, number, number],
    cb: (color: TPixelInfo, data: ImageData) => void
) => {
    const { width, height } = imageData
    const [X, Y, W, H] = rect
    for (let y = Y; y < Math.min(Y + H, height); y++) {
        for (let x = X; x < Math.min(X + W, width); x++) {
            cb(createPixelInfo(imageData, x, y), imageData)
        }
    }
}

Pixel modification

Note that imageData.data is a Uint8ClampedArray:

/**
 * Modify based on array index
 * @param color
 * @param index
 * @param array
 */
export const assignColor = (
    color: [number, number, number, number],
    index: number,
    array: Uint8ClampedArray
) => {
    const [r, g, b, a] = color
    array[index + 0] = r
    array[index + 1] = g
    array[index + 2] = b
    array[index + 3] = a
}

/**
 * Modify based on coordinates
 * @param color
 * @param x
 * @param y
 * @param image
 * @returns
 */
export const setColor = (
    color: [number, number, number, number],
    x: number,
    y: number,
    image: ImageData
) => {
    const { width, height, data } = image
    if (y < 0 || y >= height || x < 0 || x >= width) {
        return
    }
    const pixelIdx = y * width + x
    const index = pixelIdx * 4
    assignColor(color, index, data)
}
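One property worth noting: because the target is a Uint8ClampedArray, out-of-range channel values are clamped on write, so assignColor can never produce an invalid color. A self-contained check:

```typescript
const assignColor = (
    color: [number, number, number, number],
    index: number,
    array: Uint8ClampedArray
) => {
    const [r, g, b, a] = color
    array[index + 0] = r
    array[index + 1] = g
    array[index + 2] = b
    array[index + 3] = a
}

const data = new Uint8ClampedArray(4)
// 300 clamps down to 255 and -20 clamps up to 0 on write
assignColor([300, -20, 128, 255], 0, data)
// data is now [255, 0, 128, 255]
```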

Mosaic filter

A mosaic filter essentially reduces the sampling resolution and then expands each sample point to fill its whole sampling area, occluding detail in the image.

Based on the capabilities provided by the above tools and the characteristics of the filter itself, do a requirement analysis:

  • Mosaic tile size can be set
  • Sampling offset can be set
  • Pipeline mode
  • Change the data in place

Tile cutting and traversal

/**
 * Mosaic filter
 * @param image image data
 * @param size tile size
 * @param offset sampling offset
 * @returns
 */
export const filterMosaic = (image: ImageData, size: number, offset: number = 0.5) => {
    // If the tile size is less than 2, do nothing
    if (size < 2) {
        return image
    }
    // Traverse and sample by tile size
    traverseSampling(image, ({ x, y, color }) => {
        // Starting from the sample point, traverse the pixels in the tile area
        traverseArea(image, [x, y, size, size], ({ x, y }, image) => {
            // Set every pixel in the tile area to the sample point color
            setColor(color, x, y, image)
        })
    }, size, offset)
    return image
}

Filter factory

Curry the size and offset arguments into a higher-order function:

/**
 * Mosaic filter factory
 * @param size tile size
 * @param offset sampling offset
 * @returns
 */
export const createMosaicFilter = (size: number, offset: number = 0.5) => (image: ImageData) => {
    return filterMosaic(image, size, offset)
}

Page implementation

// some imports
export default (props) => {

    // Canvas that outputs the mosaic-filtered image
    const canvas = createRef()
    // Tile size
    const [size, setSize] = useState(8)
    // Sampling offset
    const [offset, setOffset] = useState(0.5)
    // The original image element (DOM)
    const [img, setImg] = useState(null)

    // putImageData outputs the filtered data
    const drawMosaic = (imageData, canvas) => {
        imageData = createMosaicFilter(size, offset)(imageData)
        canvas.width = imageData.width
        canvas.height = imageData.height
        const ctx = canvas.getContext('2d')
        ctx.putImageData(imageData, 0, 0)
    }

    // some event handlers

    useEffect(() => {
        if (!img || !canvas.current) {
            return
        }
        const data = getImageData(img)
        drawMosaic(data, canvas.current)
    }, [img, size, offset])

    return (
        <div className="mosaic-container">
            {/* some jsx */}
            <div className="mosaic-row">
                <img className="mosaic-origin" src={`./images/logo.jpg`} onLoad={handleImageLoad} />
                <canvas className="mosaic-canvas" ref={canvas} />
            </div>
        </div>
    )
}

Tile size and sampling offset can be adjusted with sliders.

Character painting

As mentioned at the start, each channel in the RGBA color structure is represented as a value from 0~255, i.e. 256 gradations; any single channel taken alone can only form a grayscale image with no color.

The character paintings we often see use one character to represent the gray level of a pixel, building up the overall look of a picture. It is a bit like the Impressionist paintings of Monet: the farther away you stand, the better it looks.

The grayscale value of the character

For example, to determine the grayscale of the capital letter A, create a 120×120 canvas and draw a black letter A on it. The letter covers the pixels whose alpha value is non-zero. Count the pixels with non-zero alpha and divide by the total number of pixels to get the letter's grayscale value. Doing this for a set of characters and deduplicating by grayscale value gives a grayscale character table to keep on hand:

/**
 * Get the grayscale ratio of the data
 * @param data
 * @returns
 */
const getRate = (data: Uint8ClampedArray) => {
    let c = 0
    for (let i = 0; i < data.length; i += 4) {
        const val = data[i + 3]
        if (val > 0) {
            c += 1
        }
    }
    const C = data.length / 4
    const rate = c / C
    return rate
}

/** Cache */
const CACHE: { FULL_CHAR_MAP?: { letter: string, value: number }[] } = {}

/**
 * Grayscale character table
 * @param fontFamily font
 * @param size size
 * @returns
 */
export const getFullCharMap = (fontFamily = 'monospace', size = 240) => {
    if (!CACHE.FULL_CHAR_MAP) {
        const canvas = document.createElement('canvas')
        canvas.width = size
        canvas.height = size
        const ctx = canvas.getContext('2d')
        const rateMap: { letter: string, value: number }[] = []
        for (let i = 32; i < 128; i++) {
            const letter = String.fromCharCode(i)
            const { data } = drawLetterData(ctx, letter, size, fontFamily)
            const rate = getRate(data)
            const val = Math.round(rate * 256)
            const item = rateMap.find(({ value }) => value === val)
            if (!item) {
                rateMap.push({ letter, value: val })
            }
        }
        CACHE.FULL_CHAR_MAP = rateMap
            // .filter(({ value }) => value > 0)
            .sort((b, a) => a.value - b.value)
    }
    return CACHE.FULL_CHAR_MAP
}
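getRate itself has no DOM dependency, so its behavior is easy to check against synthetic pixel data (drawLetterData above is the project's own drawing helper and is not reproduced here):

```typescript
// Coverage ratio: pixels with non-zero alpha over total pixels
const getRate = (data: Uint8ClampedArray) => {
    let c = 0
    for (let i = 0; i < data.length; i += 4) {
        if (data[i + 3] > 0) {
            c += 1
        }
    }
    return c / (data.length / 4)
}

// 4 pixels, exactly one of them opaque → coverage 0.25
const data = new Uint8ClampedArray([
    0, 0, 0, 0,
    0, 0, 0, 255,
    0, 0, 0, 0,
    0, 0, 0, 0,
])
const rate = getRate(data)
```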

Sampling

A character can be seen as a tile: taking each character as one pixel is similar to the mosaic filter, requiring reduced sampling precision.

At the sample point we don't need every color channel. In this article the green channel is used as the grayscale basis, and the corresponding character is then looked up in the grayscale character table.

The type definition

export type TMosacPoint = [number, number, TColor]
export type TMosacData = {
    width: number,
    height: number,
    size: number,
    points: TMosacPoint[],
    columns: number,
    rows: number,
}

Collection sampling point

This essentially defines all the tiles, collects each tile's reference color, and returns some information for later use.

/**
 * Collect sample points
 * @param image
 * @param size
 * @param offset
 * @returns
 */
export const getMosaicData = (image: ImageData, size: number, offset: number = 0.5): TMosacData => {
    const { width, height } = image
    const rows = height / size >> 0
    const columns = width / size >> 0
    const points: TMosacPoint[] = []
    traverseSampling(image, ({ x, y, color }) => {
        points.push([x, y, color])
    }, size, offset)
    return { points, width, height, size, columns, rows }
}

Single sampling to character

/**
 * Sample to character (green channel)
 * @param point
 * @param charMap
 * @returns
 */
export const samp2char_g = (
    point: TMosacPoint,
    charMap: { letter: string, value: number }[]
) => {
    // Get green
    const [, g] = point[2].map(v => v / 256)
    // Convert the green grayscale into an index into the character table
    const charIndex = g * charMap.length >> 0
    const char = charMap[charIndex]
    return char.letter
}
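A quick check of the mapping with a hypothetical two-entry character table (sorted dense-to-sparse, as getFullCharMap produces): a dark pixel should map to the dense character, a bright one to the sparse character.

```typescript
type TColor = [number, number, number, number]
type TMosacPoint = [number, number, TColor]

const samp2char_g = (
    point: TMosacPoint,
    charMap: { letter: string, value: number }[]
) => {
    // Normalize the green channel to 0~1 and index into the table
    const [, g] = point[2].map(v => v / 256)
    const charIndex = g * charMap.length >> 0
    return charMap[charIndex].letter
}

// Hypothetical table: '#' covers far more pixels than '.'
const charMap = [{ letter: '#', value: 200 }, { letter: '.', value: 10 }]

const dark = samp2char_g([0, 0, [0, 0, 0, 255]], charMap)     // '#'
const bright = samp2char_g([0, 0, [0, 255, 0, 255]], charMap) // '.'
```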

Cut the sample set into a multi-line string

export const mosaic2lines = (
    mosaicData: TMosacData,
    charMap: { letter: string, value: number }[] = getFullCharMap()
): string[] => {
    const chars = mosaicData.points.map(point => samp2char_g(point, charMap))
    const lines: string[] = []
    while (chars.length > 0) {
        lines.push(chars.splice(0, mosaicData.columns).join(''))
    }
    console.log(lines.join('\n'))
    return lines
}
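The splice-based chunking can be sketched in isolation:

```typescript
// Cut a flat character list into rows of `columns` characters each;
// splice removes the consumed characters from the front as it goes.
const chars = ['a', 'b', 'c', 'd', 'e', 'f']
const columns = 3
const lines: string[] = []
while (chars.length > 0) {
    lines.push(chars.splice(0, columns).join(''))
}
// lines is ['abc', 'def']
```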

Now you can see something interesting in the console.

Line style adjustment

The output is too tall overall; for display, reduce the line height and open up the character spacing.

.impress-line {
    margin: 0;
    line-height: 0.7em;
    font-family: monospace;
    font-size: 12px;
    color: #000;
    letter-spacing: 1px;
}

Be sure to use a monospace font!

Page implementation

// some imports
export default (props) => {
    // If the character drawing is too large, use transform: scale to adjust
    // its overall size, because the font size cannot go below about 11px.
    const [scale, setScale] = useState(1)
    // Tile size
    const [size, setSize] = useState(6)
    // Structured image data
    // Cached directly, since it is not modified in place
    const [imageData, setImageData] = useState(null)
    // Line data for the character drawing
    const [lines, setLines] = useState([])

    const drawImpress = (imageData) => {
        // Collect the sample points
        const mosaicData = getMosaicData(imageData, size)
        // Generate the character-drawing lines
        const lines = mosaic2lines(mosaicData)
        // Estimate the character-drawing width (about 8px per character)
        const mayWidth = mosaicData.columns * 8
        // Scale the character drawing to the image width
        setScale(imageData.width / mayWidth)
        // Set the character-drawing data
        setLines(lines)
    }

    // some event handlers

    useEffect(() => {
        if (!imageData) {
            return
        }
        drawImpress(imageData)
    }, [imageData, size])

    return (
        <div className="impress-container">
            {/* some jsx */}
            <div className="impress-row">
                <img className="impress-origin" src={`./images/logo.jpg`}
                    onLoad={handleImageLoad} />
                <div className="impress-lines-container"
                    style={{ transform: `scale(${scale})` }}>{
                    lines.map((line, idx) =>
                        <pre className="impress-line" key={idx}>{line}</pre>)
                }</div>
            </div>
        </div>
    )
}

The output

It’s kind of cute.

Video

The key detail of the video implementation is a bridge canvas: first draw the video onto the canvas, then take the data from the canvas and render it as a character drawing.

In addition, because the video is dynamic, the bridge canvas needs to be constantly updated.

Frame cycle factory

export const animationFactory = <T>(cb: (opts?: T) => void, opts?: T) => {
    let stop = false, handle = 0
    const loop = () => {
        cancelAnimationFrame(handle)
        if (stop) {
            return
        }
        cb(opts)
        handle = requestAnimationFrame(loop)
    }

    return {
        start() {
            stop = false
            requestAnimationFrame(loop)
        },
        stop() {
            stop = true
            cancelAnimationFrame(handle)
        }
    }
}

Bridge Canvas

Sizing the canvas

When the video's loadedmetadata event fires, we can get the size of the video:

// Animation frame factory
const frameDraw = (ctx, video, setImageData) => () => {
    ctx.drawImage(video, 0, 0)
    const imageData = ctx.getImageData(0, 0, ctx.canvas.width, ctx.canvas.height)
    setImageData(imageData)
}

let eng = null
const handleLoadedMetaData = (e) => {
    const { target } = e
    const { current: cvs } = canvas
    cvs.width = target.videoWidth
    cvs.height = target.videoHeight
    const ctx = cvs.getContext('2d')
    if (!eng) {
        // Create the animation frame
        const frame = frameDraw(ctx, video.current, setImageData)
        // Create the animation engine
        eng = animationFactory(frame)
        // Start the engine and begin updating the bridge canvas
        eng.start()
    }
}

Canvas to character painting

The character-drawing step is essentially unchanged:

const drawImpress = (imageData) => {
    // Collect sample points
    const mosaicData = getMosaicData(imageData, size)
    // Compose the characters
    const lines = mosaic2lines(mosaicData)
    // Estimate the width
    const mayWidth = mosaicData.columns * 8
    // Scale to the image width
    setScale(imageData.width / mayWidth)
    // Render
    setLines(lines)
}

Page implementation

// some imports

const frameDraw = (ctx, video, setImageData) => () => {
    ctx.drawImage(video, 0, 0)
    const imageData = ctx.getImageData(0, 0, ctx.canvas.width, ctx.canvas.height)
    setImageData(imageData)
}

export default (props) => {

    // Video player
    const video = createRef()
    // Bridge canvas
    const canvas = createRef()

    // Overall scale of the character drawing
    const [scale, setScale] = useState(1)
    // Tile size
    const [size, setSize] = useState(8)
    // Image data
    const [imageData, setImageData] = useState(null)
    // Character-drawing data
    const [lines, setLines] = useState([])
    // Frame-loop engine
    let eng = null

    const handleLoadedMetaData = (e) => {
        // ...
    }

    const drawImpress = (imageData) => {
        // ...
    }

    // some event handlers

    useEffect(() => {
        if (!imageData) {
            return
        }
        drawImpress(imageData)
    }, [imageData, size])

    useEffect(() => () => {
        if (eng) {
            eng.stop()
        }
    }, [])

    return (
        <div className="video-container">
            {/* some jsx */}
            <div className="video-row">
                <video
                    ref={video}
                    controls
                    autoPlay
                    loop
                    className="video-origin"
                    src="./images/cat.mp4"
                    onLoadedMetadata={handleLoadedMetaData}
                />
                <canvas ref={canvas}
                    style={{ backgroundColor: '#eee', display: 'none' }} />
                <div className="video-lines-container"
                    style={{ transform: `scale(${scale})` }}>{
                    lines.map((line, idx) =>
                        <pre className="video-line" key={idx}>{line}</pre>)
                }</div>
            </div>
        </div>
    )
}

That's all.