Without further ado, a picture first:
GitHub address: github. If you find it fun, please give it a star — thanks!
StringAnmt
Programmers don't have to trade on their looks — live-coding can get you on camera too! And with a bit of flair at that! (Actually, the live-streaming angle was an afterthought; originally I meant to do something "meaningful" with this gadget.)
A quick explanation of how it works

Using the newer `navigator.mediaDevices.getUserMedia` API: open the camera, turn each video frame into a string animation, and draw the result to a canvas. Yes, it's that simple.
- Open the camera, read the camera's stream, and output it to the `video` element:
```javascript
/* Get the video element */
var vdo = document.getElementsByTagName('video')[0];
/* Device start parameters */
var opt = {
  /* Turn on the microphone */
  audio: true,
  /* Start the camera */
  video: { height: 600, width: 800 }
};
// Call system devices (microphone, camera)
navigator.mediaDevices
  .getUserMedia(opt)
  .then(function (mediaStream) {
    // Attach the output stream
    vdo.srcObject = mediaStream;
    // Video-ready callback
    vdo.onloadedmetadata = function (e) {
      vdo.play();
    };
  })
  .catch(function (err) {
    /* error handling */
  });
```
- Get the pixel data of each video frame: this is where canvas comes in. We use canvas's `drawImage` to draw each frame onto the canvas, then `getImageData` to read the pixel data, convert each sampled pixel to grayscale, and fill in a character according to the gray level:
```javascript
var cvs = document.getElementsByTagName('canvas')[0];
var ctx = cvs.getContext('2d');
var fontSize = 10;
ctx.font = fontSize + "px Arial";
ctx.drawImage(vdo, 0, 0, 800, 600);
var fm = ctx.getImageData(0, 0, 800, 600);
var data = fm.data;
var str = '';
for (var j = 0; j < 60; j++) {
  str = '';
  for (var i = 0; i < 80; i++) {
    /* Locate the sampled pixel: one sample every fontSize pixels,
       on an 800px-wide frame, 4 bytes (RGBA) per pixel */
    var index = (j * 800 + i) * fontSize;
    index *= 4;
    /* Grayscale: gray = r*0.299 + g*0.587 + b*0.114 */
    var gray = data[index] * 0.299 + data[index + 1] * 0.587 + data[index + 2] * 0.114;
    str += addText(gray);
  }
  ctx.fillText(str, 0, j * fontSize, 800);
}

function addText(gray) {
  /* Character direction: left to right maps black to white */
  var text = "To live and die for our country.";
  var d = parseInt(256 / text.length);
  var i = parseInt(gray / d);
  /* Prevent overflow */
  if (i > text.length - 1) {
    i = text.length - 1;
  }
  return text[i];
}
```
That's the processing for a single frame. We then use `requestAnimationFrame` to grab the video frame repeatedly and redraw the output. And that's... pretty much the whole core — there's nothing else to talk about. It's more math than code.
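The per-frame steps above — sample a pixel, convert to gray, map gray to a character — boil down to a bit of pure arithmetic. Here is a sketch of that math as standalone functions (the function names are mine for illustration, not part of StringAnmt):

```javascript
// ITU-R BT.601 luma: the same grayscale formula used in the snippet above.
function luma(r, g, b) {
  return r * 0.299 + g * 0.587 + b * 0.114;
}

// Map a gray value (0..255) to a character in the palette.
// Left-to-right in the palette runs black-to-white.
function grayToChar(gray, palette) {
  var step = Math.floor(256 / palette.length);
  var i = Math.min(Math.floor(gray / step), palette.length - 1);
  return palette[i];
}

// Flat RGBA offset of the pixel sampled for character cell (col, row),
// taking one sample every `fontSize` pixels on a frame `width` pixels wide.
function sampleOffset(col, row, fontSize, width) {
  return (row * width + col) * fontSize * 4;
}
```

With `fontSize = 10` and `width = 800`, `sampleOffset` reproduces the `(j*800+i)*fontSize*4` indexing from the main loop.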
A quick word on how to use the library
Quick start:

```html
<canvas id="cvs"></canvas>
<video id="vdo" style="display:none"></video>
<script src="StringAnmt.js"></script>
<script>
  var StrAnmt = new StringAnmt({
    videoId: 'vdo',
    canvasId: 'cvs',
    text: [' ', 'le', 'water', 'Simon'],
    fontSize: '18'
  });
  StrAnmt.openCamera(
    window.screen.width,
    window.screen.height,
    false
  );
</script>
```
StringAnmt parameter description

- `videoId`: id of the `video` element;
- `canvasId`: id of the `canvas` element;
- `text`: the string or array to render as the animation; left to right should run black to white. Mixing letters, numbers, and Chinese characters is not recommended — use characters of similar width to avoid distorted output. The more characters you use, the more detail (gray levels) you get;
- `fontSize`: character size, as a string;
- `color`: output character color.

StringAnmt methods:

- `openCamera(width, height, isAudio)`: open the camera
  - `width`: camera width
  - `height`: camera height
  - `isAudio`: whether to enable the microphone
- `play()`: play
- `pause()`: pause
- `playAndPause()`: pause if playing, play if paused (useful for screenshots)
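A tiny usage sketch for the methods above — toggling playback when the canvas is clicked, which is exactly the "screenshot" use case `playAndPause()` is meant for. The `bindToggle` helper is my own wrapper for illustration, not part of the library:

```javascript
// Wire a click on the canvas to the animation's play/pause toggle.
// `anmt` is expected to be a StringAnmt instance (see the quick start above).
function bindToggle(canvas, anmt) {
  canvas.addEventListener('click', function () {
    anmt.playAndPause();
  });
}

// In the page: bindToggle(document.getElementById('cvs'), StrAnmt);
```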
A final word

Yes, I hacked this together on Valentine's Day — bored, alone, with nothing better to do. At the time it was just for fun, and looking back the whole thing is pretty simple; there isn't much more to say about it. But I later realized that with a few tweaks it could also generate those little avatar-style overlays streamers use, so you can still show off with it... One annoyance: in Chrome, `mediaDevices.getUserMedia` refuses to open the camera unless your page is served over HTTPS. Argh — I had the LAN server set up and the WebSocket side done... and the camera just wouldn't open. So WebRTC turned out a bit more complicated than expected. Anyone with a little common sense can guess what I was trying to do. (Grin)
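Since the secure-context restriction bit me, it's worth checking up front whether the API is reachable at all: on plain-HTTP pages Chrome leaves `navigator.mediaDevices` undefined (HTTPS and `localhost` count as secure). The `cameraAvailable` helper below is my own suggestion, not part of StringAnmt; it takes `navigator` as a parameter so the check stays testable:

```javascript
// Returns true when mediaDevices.getUserMedia is actually reachable.
// On insecure origins Chrome omits mediaDevices entirely, so this
// catches the "camera silently refuses to open" case before calling it.
function cameraAvailable(nav) {
  return !!(nav.mediaDevices && typeof nav.mediaDevices.getUserMedia === 'function');
}

// In the page: if (!cameraAvailable(navigator)) { /* warn: needs HTTPS or localhost */ }
```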