• How It’s Made: I/O Photo Booth
  • Very Good Ventures Team
  • The Nuggets translation Project
  • Permanent link to this article: github.com/xitu/gold-m…
  • Translator: Hoarfroster
  • Proofreader: Chorer

We, the developers at Very Good Ventures, partnered with Google to bring you this year’s Google I/O interactive experience: I/O Photo Booth! You can take photos with the famous Google mascots, Flutter’s Dash, Android Jetpack, Chrome’s Dino, and Firebase’s Sparky, and decorate your photos with stickers including party hats, pizza, funky glasses, and more. Finally, you can share your photos on social media, or download them to update your profile picture!

Flutter’s Dash, Firebase’s Sparky, Android Jetpack, and Chrome’s Dino

We built I/O Photo Booth using Flutter web and Firebase. Since Flutter now supports web applications, we thought this would be a great way to make the app easily accessible to attendees from all over the world at this year’s virtual Google I/O. Flutter’s web support removes the need to install the app from an app store, and it gives us the flexibility to run on whatever device is at hand: mobile, desktop, or tablet. Anyone can experience I/O Photo Booth simply by visiting the page in any modern browser on any device, with nothing to download!

Although I/O Photo Booth was designed as a web experience, all of the code was written with a platform-agnostic architecture. Elements that require native support, such as the camera plugin, are implemented per platform, while the same application code runs everywhere (desktop, web, and mobile).

Create a virtual photo booth using Flutter

Build a Flutter camera plugin for the web

The first challenge was building a camera plugin for Flutter on the web. We initially reached out to the Baseflow team, since they maintain the existing open source Flutter camera plugin. With Baseflow focused on building first-class camera support for iOS and Android, we were happy to take on the web implementation ourselves. We stayed as close to the official plugin interface as possible so that we could merge our work back into the official plugin when ready.

We identified two APIs that were critical to building the I/O Photo Booth camera experience in Flutter:

  1. Initialize the camera: The application first needs access to a camera on the device. On a desktop this might be a webcam; on a mobile device we opt for the front-facing camera. We also request a resolution of 1080p to get the best quality each device supports.
  2. Take photos: We used the built-in HtmlElementView, which uses platform views to render native web elements as Flutter widgets. In this project we render a VideoElement as a native HTML element; this is the live preview you see on screen before taking a photo. We also use a CanvasElement, another native HTML element, to capture a frame from the media stream when the photo button is clicked.
Future<CameraImage> takePicture() async {
  final videoWidth = videoElement.videoWidth;
  final videoHeight = videoElement.videoHeight;
  final canvas = html.CanvasElement(
    width: videoWidth,
    height: videoHeight,
  );
  canvas.context2D
    ..translate(videoWidth, 0)
    ..scale(-1, 1)
    ..drawImageScaled(videoElement, 0, 0, videoWidth, videoHeight);
  final blob = await canvas.toBlob();
  return CameraImage(
    data: html.Url.createObjectUrl(blob),
    width: videoWidth,
    height: videoHeight,
  );
}

Camera permissions

After getting the Flutter camera plugin running on the web, we created an abstraction to display a different UI depending on camera permissions. For example, we can display a placeholder while waiting for the user to decide whether to allow camera access in the browser, or an error message if no camera is available.

Camera(  
 controller: _controller,  
 placeholder: (_) => const SizedBox(),  
 preview: (context, preview) => PhotoboothPreview(  
   preview: preview,  
   onSnapPressed: _onSnapPressed,  
 ),  
 error: (context, error) => PhotoboothError(error: error), 
)

In this abstraction, the placeholder builder returns the initial UI (a const SizedBox()) while the application waits for camera permission to be granted. The preview builder returns the UI once permission is granted and provides the camera’s live video stream. The error builder lets us catch errors as they occur and render a corresponding error message.

Mirror images

Our next challenge was mirroring. If we took photos with the camera feed as-is, the preview would not match what we normally see when we look in a mirror, which is what users expect from a front-facing camera; some devices handle this with built-in options such as a flip button. So when you take a picture with a front-facing camera, you should see the mirrored version.

In our first approach, we tried capturing the default camera view and then applying a 180-degree transform around the Y-axis. This seemed to work, but we then ran into a problem where Flutter would occasionally override the transform, causing the video to revert to the unmirrored version.

With help from the Flutter team, we solved this by wrapping the VideoElement in a DivElement and making the VideoElement fill the width and height of its parent. This let us apply the mirror transform to the video element without Flutter overriding it, since the transform now lives on a plain div rather than on an element Flutter manages. This approach gave us the mirrored camera view we needed!

Non-mirrored view

Mirrored view
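For comparison, the same parent-element trick can be sketched outside Flutter in plain TypeScript; the helper names here are illustrative, not part of the app:

```typescript
// Build the CSS that mirrors a camera preview horizontally.
// Kept as a pure function so the styling decision is easy to test.
function mirrorStyles(): { transform: string; transformOrigin: string } {
  return {
    // scaleX(-1) flips around the vertical axis, the same visual effect
    // as a 180-degree rotation about the Y-axis.
    transform: "scaleX(-1)",
    transformOrigin: "center",
  };
}

// Wrap a <video> in a <div>, applying the mirror to the video while the
// div owns the layout, mimicking the DivElement/VideoElement approach.
function wrapMirrored(video: HTMLVideoElement): HTMLDivElement {
  const wrapper = document.createElement("div");
  const { transform, transformOrigin } = mirrorStyles();
  video.style.width = "100%";
  video.style.height = "100%";
  video.style.transform = transform;
  video.style.transformOrigin = transformOrigin;
  wrapper.appendChild(video);
  return wrapper;
}
```

Because the transform sits on the video element inside a plain wrapper div, nothing in the framework layer has a reason to rewrite it.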

Stick to a strict aspect ratio

Enforcing a strict 4:3 aspect ratio on large screens and a strict 3:4 aspect ratio on small screens is harder than it looks! Enforcing this ratio matters both for the overall design of the web application and to ensure photos look pixel-perfect when shared on social media. It is challenging because the aspect ratios of devices’ built-in cameras vary widely.

To enforce a strict aspect ratio, the application first uses the JavaScript getUserMedia API, which provides the video stream fed into the VideoElement we see in the camera view (mirrored, of course). We also apply the object-fit: cover CSS property to ensure the video covers its parent container, and we use Flutter’s built-in AspectRatio widget to set the aspect ratio itself. The camera therefore makes no assumptions about the displayed aspect ratio: it always returns the maximum resolution it supports, and the result is then fitted to the constraints Flutter provides (4:3 or 3:4 in this case).

final orientation = MediaQuery.of(context).orientation;
final aspectRatio = orientation == Orientation.portrait
    ? PhotoboothAspectRatio.portrait
    : PhotoboothAspectRatio.landscape;
return Scaffold(
  body: _PhotoboothBackground(
    aspectRatio: aspectRatio,
    child: Camera(
      controller: _controller,
      placeholder: (_) => const SizedBox(),
      preview: (context, preview) => PhotoboothPreview(
        preview: preview,
        onSnapPressed: () => _onSnapPressed(
          aspectRatio: aspectRatio,
        ),
      ),
      error: (context, error) => PhotoboothError(error: error),
    ),
  ),
);
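For reference, the underlying getUserMedia request and the ratio selection can be sketched in plain TypeScript (the function names and the 1920×1080 ideal values are illustrative, not the app’s actual code):

```typescript
// Ask for the best front-facing camera the browser will give us; the
// article requests 1080p, and the browser treats these values as
// ideals, not hard requirements.
function cameraConstraints(): MediaStreamConstraints {
  return {
    audio: false,
    video: {
      facingMode: "user",
      width: { ideal: 1920 },
      height: { ideal: 1080 },
    },
  };
}

// The UI, not the camera, decides the aspect ratio: 4:3 in landscape,
// 3:4 in portrait, matching the Flutter AspectRatio constraints.
function aspectRatioFor(viewportWidth: number, viewportHeight: number): number {
  return viewportWidth >= viewportHeight ? 4 / 3 : 3 / 4;
}

async function startPreview(video: HTMLVideoElement): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia(cameraConstraints());
  video.srcObject = stream;
  // object-fit: cover crops the stream to fill whatever box the
  // surrounding aspect-ratio container imposes.
  video.style.objectFit = "cover";
  await video.play();
}
```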

Drag and drop to add friends and stickers

A big part of the I/O Photo Booth experience is taking pictures with your favorite Google friends and adding props. You can drag and drop friends and props into the photo, then resize and rotate them until the picture looks just right. The friends are also animated, an effect we achieved with sprite sheets.

for (final character in state.characters)  
 DraggableResizable(     
   canTransform: character.id == state.selectedAssetId,  
   onUpdate: (update) {  
     context.read<PhotoboothBloc>().add(  
       PhotoCharacterDragged(  
         character: character,   
         update: update,  
       ),  
     );  
   },  
   child: _AnimatedCharacter(name: character.asset.name),  
 ),

To make objects resizable, we created a draggable, resizable widget that can wrap any Flutter widget, in this case friends and props. This widget uses a LayoutBuilder to scale its child based on the viewport constraints. Inside it, we use GestureDetectors to hook into the onScaleStart, onScaleUpdate, and onScaleEnd callbacks, which provide the gesture details we need to reflect changes to the friends and props.

A Transform widget with a Matrix4 transform then scales and rotates the friends and props based on the various gestures reported by the GestureDetectors.

Transform(
  alignment: Alignment.center,
  transform: Matrix4.identity()
    ..scale(scale)
    ..rotateZ(angle),
  child: _DraggablePoint(...),
)
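As a small sanity check on the math, the scale-plus-rotateZ composition can be reproduced in two dimensions (a sketch, not app code); note that a uniform scale commutes with rotation, so the cascade order does not change the result:

```typescript
// Apply the 2D analogue of Matrix4.identity()..scale(s)..rotateZ(a)
// to a point: uniform scaling followed by rotation about the Z-axis.
function transformPoint(
  x: number,
  y: number,
  scale: number,
  angle: number, // radians
): { x: number; y: number } {
  const sx = x * scale;
  const sy = y * scale;
  return {
    x: sx * Math.cos(angle) - sy * Math.sin(angle),
    y: sx * Math.sin(angle) + sy * Math.cos(angle),
  };
}
```

For example, scaling (1, 0) by 2 and rotating a quarter turn lands it at (0, 2).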

Finally, we created a separate package to determine whether the device supports touch input, and the draggable, resizable widget adapts accordingly. On devices with touch input, the resize anchors and rotation icon are hidden, because you can manipulate images directly by pinching and panning; on devices without touch input, such as desktops, anchors and a rotation icon are shown to support click-and-drag.
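The capability check itself can be sketched like this (a hypothetical helper; the real project wraps the check in its own Dart package):

```typescript
// Decide whether to show resize/rotate handles: on touch devices users
// pinch and pan directly, so the anchors are hidden.
// Dependencies are injected so the check is testable off-browser.
interface TouchProbe {
  maxTouchPoints?: number;
  hasTouchStart?: boolean;
}

function supportsTouch(probe: TouchProbe): boolean {
  return (probe.maxTouchPoints ?? 0) > 0 || probe.hasTouchStart === true;
}

// In a browser this would be called as:
// supportsTouch({
//   maxTouchPoints: navigator.maxTouchPoints,
//   hasTouchStart: "ontouchstart" in window,
// });
```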

Flutter web first

Web-first development with Flutter

This was one of the first pure web projects we built with Flutter, and it has different characteristics from a mobile application.

We needed to ensure the application was responsive and adaptive in any browser on any device. In other words, we had to make sure I/O Photo Booth could scale to the browser’s size and handle both mobile and web input. We did this in several ways:

  • Responsive resizing: We should be able to resize the browser to any size and have the UI respond accordingly. If the browser window is portrait, the camera flips from a landscape view with a 4:3 aspect ratio to a portrait view with a 3:4 aspect ratio.
  • Responsive design: The desktop design shows Dash, Android Jetpack, Dino, and Sparky on the right, while on mobile devices they appear at the top. The desktop design also uses a drawer to the right of the camera, while mobile uses a BottomSheet.
  • Adaptive input: If you access I/O Photo Booth from a desktop, mouse clicks are treated as the input; on a tablet or phone, touch is the input. This is especially important when resizing stickers and placing them in photos: mobile supports pinching and panning, while desktop supports click-and-drag.

Extensible architecture

We also applied the same approach we use to build scalable mobile applications. We started I/O Photo Booth with a strong foundation, including sound null safety, internationalization, and 100% unit and widget test coverage from the first commit. We used flutter_bloc for state management because it makes it easy to test business logic and to observe every state change in the application. This is especially useful for developer logging and traceability, because we can see exactly what changed between states and isolate problems more quickly.

We also used a feature-driven monorepo structure. For example, the stickers, sharing, and live camera preview features are each implemented in their own folder, containing their own UI components and business logic. These features integrate with external dependencies, such as the camera plugin, which live in the packages directory. This architecture allows our team to work on multiple features in parallel without disrupting one another, minimizes merge conflicts, and lets us reuse code effectively. For example, the UI component library is a separate package called photobooth_ui, and the camera plugin is separate as well.

By splitting components into separate packages, we can extract and open source individual components that aren’t specific to this project. Even the UI component library package is open sourced for the Flutter community, similar to the Material and Cupertino component libraries.

Firebase + Flutter = Perfect match

Firebase authentication, storage, hosting, and more

I/O Photo Booth leverages the Firebase ecosystem for a variety of backend integrations. The firebase_auth package signs users in anonymously as soon as the application starts: Firebase Authentication creates an anonymous user with a unique ID for each session.

Firebase comes into play again on the share page. We can download a photo to save as a profile picture or share it directly to social media. If we choose to download the photo, it is stored locally on the device. If we choose to share it, the app uses the firebase_storage package to store the photo in Firebase so it can be retrieved later to populate the social post.

We defined Firebase security rules on the storage bucket to make photos immutable once created. This prevents other users from modifying or deleting photos in the bucket. In addition, we used the object lifecycle management provided by Google Cloud to define a rule that deletes all objects older than 30 days, though users can request earlier deletion by following the instructions listed in the application.
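As a sketch, immutable-once-created semantics in Firebase storage security rules might look like this (the uploads path is illustrative; the project’s actual rules may differ):

```
rules_version = '2';
service firebase.storage {
  match /b/{bucket}/o {
    // Illustrative path: anyone may read and create photos,
    // but nothing can be updated or deleted once written.
    match /uploads/{photoId} {
      allow read, create;
      allow update, delete: if false;
    }
  }
}
```

The 30-day cleanup is configured separately through Cloud Storage’s object lifecycle management, as a Delete action with an age-of-30-days condition on the bucket.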

The application also uses Firebase Hosting to host the web app quickly and securely. The action-hosting-deploy GitHub Action lets us deploy to Firebase Hosting automatically based on the target branch. When we merge changes into the main branch, the action triggers a workflow that builds the development flavor of the app and deploys it to Firebase Hosting. Likewise, when we merge changes into the release branch, the action triggers a production deployment. The combination of GitHub Actions and Firebase Hosting let our team iterate quickly and always preview the latest version.
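A minimal workflow using that action might look like the following sketch (the build command and secret names are assumptions; the real workflow also installs the Flutter SDK and selects dev vs. production flavors):

```yaml
name: deploy_dev
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # Assumes a prior step has set up the Flutter SDK.
      - run: flutter build web
      - uses: FirebaseExtended/action-hosting-deploy@v0
        with:
          repoToken: ${{ secrets.GITHUB_TOKEN }}
          firebaseServiceAccount: ${{ secrets.FIREBASE_SERVICE_ACCOUNT }}
          channelId: live
```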

Finally, we used Firebase Performance Monitoring to track key web performance metrics.

Use Cloud Functions for sharing

Before creating a social post, we first need to make sure every pixel of the photo looks perfect. The final image includes a beautiful frame in honor of I/O Photo Booth and is cropped to a 4:3 or 3:4 aspect ratio so it looks great in social posts.

We use the OffscreenCanvas API, or CanvasElement (developer.mozilla.org/en-US/docs/Web/HTML/Element/canvas) as a polyfill, to composite the original photo with the layers containing friends and props, generating a single image that can be downloaded. The image_compositor package handles this processing step.
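A sketch of that fallback and of the layer math (types and names are illustrative, not the actual image_compositor API):

```typescript
// One sticker layer to composite over the base photo.
interface Layer {
  x: number;      // top-left corner, in photo coordinates
  y: number;
  width: number;  // natural size of the sticker image
  height: number;
  scale: number;  // user-applied scale from the resize gestures
}

// Pure helper: where a layer lands on the final image, kept separate
// from canvas code so the math is testable anywhere.
function destinationRect(layer: Layer) {
  return {
    x: layer.x,
    y: layer.y,
    width: layer.width * layer.scale,
    height: layer.height * layer.scale,
  };
}

// Prefer OffscreenCanvas; fall back to a regular <canvas> element,
// mirroring the polyfill approach described above.
function makeCanvas(width: number, height: number) {
  if (typeof OffscreenCanvas !== "undefined") {
    return new OffscreenCanvas(width, height);
  }
  const canvas = document.createElement("canvas");
  canvas.width = width;
  canvas.height = height;
  return canvas;
}
```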

We then leverage Firebase’s powerful Cloud Functions to share photos to social media. When we click the share button, we are taken to a new tab on the platform of our choice with a pre-populated post. The post contains a URL that redirects to a Cloud Function we wrote. When the platform resolves the URL, it detects the dynamic meta information generated by the Cloud Function. This information lets the platform display a nice preview image of the photo in the social post, along with a link to a share page where followers can view the photo and navigate back to the I/O Photo Booth application to take their own.

function renderSharePage(imageFileName: string, baseUrl: string): string {  
 const context = Object.assign({}, BaseHTMLContext, {  
   appUrl: baseUrl,  
   shareUrl: `${baseUrl}/share/${imageFileName}`,  
   shareImageUrl: bucketPathForFile(`${UPLOAD_PATH}/${imageFileName}`),  
 });  
 return renderTemplate(shareTmpl, context);  
}

The final product looks like this:

For more information on how to use Firebase in the Flutter project, check out this Codelab.

The finished product

This project is a good example of a web-first approach to building applications. We were pleasantly surprised by how similar our workflow for building this web application was to our experience building mobile apps with Flutter. We had to consider things like viewport size, responsiveness, touch and mouse input, image load times, browser compatibility, and all the other considerations that come with building for the web, but we still wrote Flutter code with the same patterns, architecture, and coding standards. Building for the web felt like home. Flutter’s tooling and growing ecosystem of packages, including the Firebase suite, made I/O Photo Booth possible.

The Very Good Ventures team behind I/O Photo Booth

We’ve open sourced all the code. Check out the photo_booth project on GitHub!
