Many people worry that AI will take our jobs and put us out of work. I see the possibility of a more optimistic future: in the machine age, we should pay more attention to how machine learning can help us get things done.


People in the machine age are becoming more and more involved in creative activities. The variety of tools, platforms, and devices available for design keeps growing while their cost keeps falling. With these tools, you can make your own movie, record an album, design a city, or print your own flower pot, all from your home computer or even your mobile phone. Once you have tasted this creative freedom, you will want more of it.

While design tools have become cheaper and more accessible, that doesn’t mean it’s any easier to create a high-quality image or tell a compelling story. Making something original or beautiful still requires expertise, practice and experience.

Design tools and programming languages give us a great deal of power, and we can use them to make things that are truly our own, but only if we know how to use them; otherwise that power is not really ours.

 

Design tools + machine learning

My suggestion is that we can use machine learning to simplify design tools without limiting their expressiveness or depriving designers of their creativity. This may sound counterintuitive: when we think of machine learning or artificial intelligence, automation is the first thing that comes to mind.

There’s no getting around the fact that original design takes a great deal of decision making and time. As a result, design tools tend to go to one of two extremes.

One is the “one size fits all” approach, often found in consumer-level design tools, which simplifies the design process by forcing the user into a small number of preset templates.

The other is the “all in one” approach, typical of professional design tools, which exposes a large number of low-level functions. These tools demand a great deal of learning, and their operations are often organized in ways that do not match the user’s way of thinking.

At first glance, machine learning seems to offer only a slightly more sophisticated version of the “one size fits all” approach, simplifying the design process by shifting decision-making away from the designer. Machine learning can certainly be used this way, especially in these early days, but it offers a much richer range of possibilities. While we can’t really reduce the number of decisions involved in the design process, we can change what those decisions involve.

I want to talk about some of the ways machine learning can change how we operate design software and make decisions within it. These methods are: feature discovery, exploratory design, descriptive design, process organization, and dialogue systems.

I think these have the potential to help simplify the design process without compromising the creativity of designers. Perhaps even more exciting, these methods allow designers to focus their full attention on the design itself, rather than learning how to implement their ideas through these design tools. In other words, the designer will control the tools, not be constrained by them.

 

Feature discovery

When designers sit down to design, they may or may not have an exact picture of the final product in mind. Either way, they need to find a way to transform a blank canvas into that final product through a series of operations. This reminds me of a famous saying:

“Inside every stone is a statue, and the sculptor’s job is to find it.” — Michelangelo

I like this quote because it compares the art and design process to an exploration.

A block of marble has its boundaries, and within those boundaries an infinite number of possible sculptures exist at the same time. The artist’s job is akin to finding a needle in a haystack: the one combination of features that satisfies a specific set of requirements. It is similar to a chemist searching for a new molecule, or a cook searching for a new flavor.

The spaces explored in each of these problems may be completely different, but the process is much the same, because every design problem involves a specific set of interrelated attributes and constraints.

For example:

Consider some of the factors you might weigh when designing a household item such as a wine glass. If we want to make the glass taller, we may need to widen the base to keep it from tipping over. Here, a single constraint ties two properties together.

When we first encounter this problem, we may not yet be aware of its inherent constraints: at what proportions does the glass tip over?

We gain expertise by experimenting and learning the relationships between properties and the initially unknown constraints that the physical world imposes on us. Think of this exploration space as a very large map in which each possible final state is represented by a unique set of coordinates.
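To make the idea concrete, here is a minimal sketch (my own illustration, not from the original article) that treats a wine glass design as a point in a two-dimensional feature space, with an invented stability constraint linking the two properties:

```python
from dataclasses import dataclass

@dataclass
class GlassDesign:
    height_cm: float      # one coordinate in the design space
    base_width_cm: float  # a second, interrelated coordinate

def is_stable(design: GlassDesign, ratio_limit: float = 4.0) -> bool:
    """Hypothetical constraint: the glass tips over when it is more
    than `ratio_limit` times taller than its base is wide."""
    return design.height_cm / design.base_width_cm <= ratio_limit

# Making the glass taller forces us to widen the base as well.
print(is_stable(GlassDesign(height_cm=24, base_width_cm=5)))  # False: 24/5 = 4.8
print(is_stable(GlassDesign(height_cm=24, base_width_cm=7)))  # True: 24/7 ≈ 3.4
```

Exploring the map amounts to discovering, by trial and error, which coordinates satisfy constraints like this one.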

On this map, each feature of the software acts as a path that helps us explore in a particular direction. A low-level feature in a professional design tool is like a footpath: it moves us only a short distance across the map.

If we want to make the glass bigger, we can do it with a long series of low-level commands, or we can condense those commands into a high-level feature, like those found in consumer design tools, which is more like a highway: it carries us a long way in relatively few steps. The problem is that highways only have exits in well-traveled areas, so a driver who wants to reach somewhere off the beaten path has to take the back roads, and that requires many more steps.

Worse, most consumer design tools don’t provide the back roads at all: they omit the low-level functions entirely. There may be another destination not far from the exit, close to the preset goal and better matched to our expectations, but we have no way of reaching it, and we may never even learn what that alternative would have contributed to our overall design.

So while high-level features bring great gains in efficiency, they can also reduce a tool’s expressiveness and its reach across the design space. Many users settle for easier, nearby targets, leaving large areas of the map unexplored. A user with a clear goal may still find a circuitous route to the final destination, but if the exact attributes of that destination were not already in the user’s mind, he or she may never get there at all.

As a result, high-level features can render parts of the map completely inaccessible, fragmenting the exploration of the space and leaving ever larger areas beyond our reach. In this sense, consumer-grade design tools do not extend human reach; they hinder our creativity.

If we want to keep our creative freedom, we must either stick to low-level features or build a great many high-level features covering a wider range of possibilities, at some cost to the tool’s simplicity.

Ideally, a high-level feature would build a new highway for us every time we set off, so that we could reach any destination in very few operations. But that is impossible with a preset set of high-level features.

Machine learning enables us to infer a great deal about users and what they want to achieve by observing their behavior. Building a tool that learns how a designer operates the software is far more practical than trying to anticipate the designer’s needs with a predefined set of high-level features. In the past few years, a type of machine learning system called the recurrent neural network has proven particularly good at learning sequential patterns, and such systems are widely used to predict things like the next word in a text or the next note in a melody.
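As a hedged sketch of what sequence prediction looks like in code (my own illustration, using PyTorch; the command vocabulary is invented), here is a recurrent network that predicts the next editor command from the commands so far:

```python
import torch
import torch.nn as nn

# Hypothetical vocabulary of low-level editor commands.
COMMANDS = ["select", "scale", "rotate", "translate", "extrude", "undo"]

class NextCommandRNN(nn.Module):
    """Predicts the next command given the sequence of commands so far."""
    def __init__(self, embed_dim=16, hidden_dim=32):
        super().__init__()
        self.embed = nn.Embedding(len(COMMANDS), embed_dim)
        self.rnn = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, len(COMMANDS))

    def forward(self, command_ids):      # shape: (batch, seq_len)
        x = self.embed(command_ids)      # (batch, seq_len, embed_dim)
        out, _ = self.rnn(x)             # (batch, seq_len, hidden_dim)
        return self.head(out[:, -1, :])  # logits for the next command

model = NextCommandRNN()
history = torch.tensor([[0, 1, 2]])      # select → scale → rotate
logits = model(history)
print(COMMANDS[int(logits.argmax())])    # the (untrained) model's guess
```

After training on real session logs, the same model could surface its prediction as a suggested next step in the interface.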

Rather than building a design tool around a predefined set of high-level functions, we can use such a recurrent neural network to discover common sequences of low-level functions in the tool and dynamically synthesize high-level functions relevant to the designer’s current activity. The behavior patterns used to generate these custom functions need not come from a single designer; they can be extracted from the behavior of many designers. A recommendation system, like those that suggest music or movies based on a user’s preferences, can find patterns across many designers and recommend relevant features to each of them based on their workflow habits.
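A much simpler, non-neural way to get a feel for this idea is to mine a session log for frequently repeated command sequences, each a candidate for a synthesized macro. This sketch is my own illustration, not a description of any shipping tool:

```python
from collections import Counter

def frequent_subsequences(action_log, length=3, min_count=2):
    """Count contiguous command subsequences of a given length; the
    frequent ones are candidates for a one-step synthesized macro."""
    counts = Counter(
        tuple(action_log[i:i + length])
        for i in range(len(action_log) - length + 1)
    )
    return [(seq, n) for seq, n in counts.most_common() if n >= min_count]

# Hypothetical session log of low-level commands.
log = ["select", "scale", "rotate", "select", "scale", "rotate",
       "translate", "select", "scale", "rotate"]

for seq, n in frequent_subsequences(log):
    print(f"{' -> '.join(seq)}  (seen {n}x): offer as a single macro")
```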

Tool makers can leverage recommendation systems to better address the diversity of designers and the ways they digest information, make decisions, and interact with software. Rather than offering a single, preset, static set of functions and workflows that designers must adapt themselves to, tool makers can meet designers’ actual needs. By studying the behavior patterns of a large number of users, tool makers can also better understand the implicit relationships between the functions their systems provide, which yields important guidance on how to improve the software. With this approach, the toolmaker’s role shifts from curating the full set of high-level functionality to crafting finer-grained interface elements. Moving from preset rules and interface elements to intelligent generation means giving up control over certain aspects of the software, but it lets designers accomplish tasks the maker never anticipated.

 

Exploratory design

Everyone has innate aesthetic and design sensibilities, intuitions about what feels pleasing or useful. But for lack of experience, many of us lack the means to turn those intuitions into creative output. Design tools should not only help us execute designs in areas we already know; they should also help us build expertise in new ones. If we took a random group of people off the street, handed them a blank sheet of paper, and asked them to design their ideal living room, many would not know where to start.

But if they go to Pinterest and are asked to design the living room by picking out elements they like, many find it far easier. This “I’ll know it when I see it” instinct can be a powerful motivator for getting users involved in design. Users do not need to understand how the tool works behind the scenes; when they see a result, they can decide whether they like it.

Earlier, we looked at a visualization of search through a two-dimensional space. Although limited in scope, this visual approach provides an intuitive and fine-grained mechanism for design: the user simply selects a location to call up the possible design associated with it.

This visual organization also lets users form a clear mental model of where particular effects live within the search space. Of course, real-world design problems rarely have just two dimensions. Fortunately, we can use dimensionality reduction to display high-dimensional features on a low-dimensional map. (Note: dimensionality reduction lets a machine learning system lower time and space complexity and avoid unnecessary feature-extraction overhead; with fewer features to interpret, we can explain the data better and extract knowledge from it more easily. It is also convenient for data visualization.)
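As a sketch of how such a map can be computed (my own example, using scikit-learn’s t-SNE; the original article does not name a specific algorithm), here is dimensionality reduction of a batch of image vectors down to 2-D coordinates:

```python
import numpy as np
from sklearn.manifold import TSNE

# Stand-in data: 200 leaf-outline images of 32x32 pixels, flattened.
rng = np.random.default_rng(0)
images = rng.random((200, 32 * 32))

# Project the 1024-dimensional images onto a 2-D map; similar images
# receive nearby coordinates.
coords = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(images)

print(coords.shape)  # (200, 2): one (x, y) map position per image
```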

[Video: a dimensionality-reduction system gradually arranging leaf-outline images on a two-dimensional map: http://bj.bcebos.com/bduxc/F/1524808593.mp4]
In the animation above, I fed a set of leaf-outline images into the dimensionality-reduction system. As training unfolds, the algorithm repositions each leaf on the two-dimensional map so that similar leaves end up next to one another. Ultimately, the process produces a continuous two-dimensional map of the range of variation that can occur in a leaf.

Once training is complete, the system can reconstruct the image associated with any two-dimensional coordinate on the map, and those reconstructions can then be edited further. Notably, the training requires no labeled examples, making this a simple and fast route to all kinds of novel designs. We are free to look at any location on the map and explore how the leaf might vary in any direction, while everything we see still conforms to the concept of a leaf; we can view the image at each coordinate and move smoothly between them.
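One way to build such a reconstructive map (my assumption; the article does not name the model) is an autoencoder whose bottleneck is the two-dimensional map coordinate. A minimal sketch in PyTorch:

```python
import torch
import torch.nn as nn

class LeafAutoencoder(nn.Module):
    """Compresses 32x32 leaf images to 2-D map coordinates and back."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(32 * 32, 128), nn.ReLU(),
            nn.Linear(128, 2),                 # the 2-D map coordinate
        )
        self.decoder = nn.Sequential(
            nn.Linear(2, 128), nn.ReLU(),
            nn.Linear(128, 32 * 32), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = LeafAutoencoder()
# Once trained on leaf images, any map coordinate decodes to an image:
coordinate = torch.tensor([[0.3, -1.2]])
image = model.decoder(coordinate).reshape(32, 32)
```

Training it to reproduce its inputs needs no labels, which matches the point above about learning without sorted samples.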

If you already have a mature idea of what you want to design, you can head straight for it. But if we want to explore further and see what else is possible, the dimensionality-reduced map above can help us step back from a particular solution and try something new.

Alternatively, this visual map can be temporarily superimposed on the design itself, letting users explore variations of an element while previewing the changes in real time.

This exploratory interface lets you change design elements without redesigning them from scratch.

For example, suppose we have drawn an oak leaf with Bézier paths, and now we want it to become a maple leaf. In traditional design software, we would endlessly tweak the Bézier paths. As designers, we have grown used to this workflow, but that has a great deal to do with how today’s software happens to work: the Bézier path feature leaves us no option but to make the change manually.

Converting one shape into another this way throws away most of the work that went into creating the first shape. That added time cost obviously shouldn’t be necessary; we shouldn’t have to walk across town to reach our neighbor’s house.
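On the map-based approach, the same change becomes a short walk through the learned space. A hedged sketch, reusing the hypothetical LeafAutoencoder from the earlier example: encode both leaves and interpolate between their coordinates:

```python
import torch

# `model` is the trained LeafAutoencoder from the earlier sketch;
# oak and maple stand in for flattened 32x32 leaf images.
oak = torch.rand(1, 32 * 32)
maple = torch.rand(1, 32 * 32)

z_oak = model.encoder(oak)      # map coordinate of the oak leaf
z_maple = model.encoder(maple)  # map coordinate of the maple leaf

# Walk from oak toward maple in a few steps; each intermediate
# coordinate decodes to a plausible in-between leaf.
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    z = (1 - t) * z_oak + t * z_maple
    intermediate = model.decoder(z).reshape(32, 32)
```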

Such algorithms let designers follow their design intuition without being constrained by a particular function of a particular tool. This approach does not take control of the design away from designers; it merely removes the busywork that the previous generation of design tools imposed on them, letting designers focus on the design itself rather than on the mechanics of the tool.

 

Descriptive design

So far, we have talked about interfaces that closely resemble traditional maps. Like any map, they can carry text labels if we wish. This lets us change a design with verbal commands, for example: “Give me a maple leaf.” When the result appears, we can go on: “Make it more like an oak leaf.” That alone is very powerful, but we can go further still.

In 2013, Tomas Mikolov et al. published a series of papers describing low-dimensional embedding techniques that capture conceptual relationships between words. Just like the maps discussed above, words that are closely related in real-world usage sit next to each other in this vocabulary map. We can even do algebra on real-world concepts. For example, they found that the expression:

Madrid - Spain + France

comes closer to “Paris” than any other combination of words.

Likewise:

The combination “King - Man + Woman” comes closest to “Queen.”
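This vector arithmetic is easy to reproduce with pretrained word vectors. A sketch using the gensim library; the vector set named here is one of gensim’s standard downloads and is assumed to be available:

```python
import gensim.downloader

# Load pretrained GloVe word vectors (downloaded on first use).
vectors = gensim.downloader.load("glove-wiki-gigaword-100")

# Madrid - Spain + France ≈ Paris
print(vectors.most_similar(positive=["madrid", "france"],
                           negative=["spain"], topn=1))

# King - Man + Woman ≈ Queen
print(vectors.most_similar(positive=["king", "woman"],
                           negative=["man"], topn=1))
```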

This fascinating mechanism suggests a whole new way of thinking about design tools. It lets us apply concepts to a design visually or verbally, with no need for auxiliary abstractions and control systems.

For example, if we want to find something similar to Picasso’s work, but excluding his Cubist period, we can express that directly as an operation on concepts: “Picasso - Cubism.”

We can do the same with audio or any other medium. In the past few years, related techniques such as style transfer and Neural Doodle have extended these mechanisms further.

These techniques have already shown up in photo-sharing apps, not as features of design tools but as novelty photo filters, like those found in Instagram or Photoshop.

As with Photoshop’s filters in the 1990s, such novel effects become popular quickly but do little to deepen or extend the design process.

But as part of a larger, more comprehensive design framework, these technologies provide a powerful mechanism: they let us explore and develop ideas by directly manipulating the concept space.

However, transformative as these technologies are, I think they have a flaw. As every designer knows, the hardest thing about design isn’t making individual decisions; it is reconciling decisions across many components to produce a cohesive whole.

As designers, we switch back and forth among many component-level decisions while keeping the whole in mind, and sometimes those decisions conflict with one another.

As with a Rubik’s cube, we can’t simply solve one face and then move on to the next: doing so undoes some of the work we have already done. We have to work on all of the faces at once. It can be a very complicated process, but learning to handle it holistically is at the heart of being a designer.

While the machine learning techniques discussed so far can help simplify these component-level decisions, they do not address the most difficult part of design. To help designers build that holistic expertise, let’s explore two more concepts.

 

Process organization and dialogue interface

It is much easier for machine learning systems to understand simple expressions that convey a single command or point of information than complex, multi-information expressions. However, one of the hardest parts of designing something is figuring out how to break down a complex system into simple, easy-to-understand concepts.

One of the most important ways design tools can help designers is with exactly this simplification process.

Tools can help designers reach concise expression by providing interfaces and workflows that guide users through a series of simple experiences and decisions, each one a piece of a larger and more complex task.

A good example of this approach is 20Q, the electronic version of the game Twenty Questions. Like the traditional parlor game, 20Q asks the user to think of an object or a famous person, then poses a series of multiple-choice questions to work out what the user is thinking of.

Usually the first question the system asks is: “Is it an animal, vegetable, mineral, or concept?”

Each subsequent question tries to distinguish between the options one level deeper than what the user has already revealed. For example, if the answer to the first question is “animal,” the next question will be “Is it a mammal?” If the answer is “vegetable,” the next will be “Is it usually green?” Each question can be answered with “yes,” “no,” “sometimes,” or “irrelevant.”

The 20Q system guessed the right person, place or thing 80% of the time after 20 questions, and 98% of the time after 25 questions.

The system uses a machine learning technique called decision tree learning, which lets the machine home in on what the user has in mind in as few steps and questions as possible.

Using data generated by users’ interactions with the system, the algorithm learns the relative value of each question at eliminating as many wrong options as possible, ensuring that the most informative questions are asked first.

For example, if the system already knows the user is thinking of a celebrity, asking whether that person is still alive is more useful than asking whether the person has written a book, since only a small share of famous figures are alive today, while many celebrities have published books.

While no single question can capture everything a user wants, relatively few well-chosen questions can reveal the correct answer with surprising speed.
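A hedged sketch of the core idea (my own illustration, not 20Q’s actual algorithm): choose the next question by how evenly it splits the remaining candidates, since a near 50/50 split eliminates about half of them whatever the answer:

```python
import math

# Hypothetical candidates and yes/no attributes.
CANDIDATES = {
    "dog":    {"mammal": True,  "alive": True,  "green": False},
    "cat":    {"mammal": True,  "alive": True,  "green": False},
    "frog":   {"mammal": False, "alive": True,  "green": True},
    "statue": {"mammal": False, "alive": False, "green": False},
}

def split_entropy(candidates, attribute):
    """Binary entropy of the yes/no split this question produces; a
    50/50 split (entropy 1.0) halves the field whatever the answer."""
    yes = sum(1 for c in candidates.values() if c[attribute])
    p = yes / len(candidates)
    if p in (0.0, 1.0):
        return 0.0  # the question eliminates no one: useless
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def best_question(candidates):
    attributes = next(iter(candidates.values())).keys()
    return max(attributes, key=lambda a: split_entropy(candidates, a))

print(best_question(CANDIDATES))  # "mammal": splits the four candidates 2/2
```

A real system would also weight the questions by learned answer statistics, as the 20Q description above suggests.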

Besides helping the system understand what the user is saying, this format also helps users express their ideas more clearly and purposefully. At its core, the system can be thought of as a mechanism for discovering the best path through a large number of related decisions.

Each question and answer acts as a guide: it brings users closer to what they want to say, while also helping them explore aspects of the problem they might want to think about and express.

This mechanism also extends into the realm of interface design, helping users arrive at the form they want by answering a series of questions. Those questions could be answered through more natural interaction patterns such as speech or gesture, so that users need not learn complex menu systems or tools at all.

Given recent advances in machine learning, it is increasingly possible for machines to answer users’ complex, interrelated design questions.

For example, a user could ask practical questions that help him or her evaluate how well the design will serve real users.

The conversation would take the form of a human dialogue, but behind the machine sits an enormous amount of data about the design problem, which helps users solve problems better.

The machine may also bring to bear its ability to simulate relevant real-world constraints, such as those of materials, physics, or chemistry.

Embedding this capability in real-time interaction lets an architect, for instance, rule out bad ideas quickly and converge on fruitful ones, saving a great deal of time.

Beyond such real-world constraints, the instructions users give in an interaction are often not explicit, whether because the machine’s knowledge is limited or simply because the user is not articulate.

Rather than committing to a single “best guess,” the system can pose further questions or offer alternatives to figure out what the user really has in mind.

Such a session therefore not only clarifies the user’s real intent; it also steadily enriches the machine’s knowledge base.

It is also a more natural recording mechanism: it captures the iterative evolution of the user’s thinking and is easier to review and analyze than a traditional “action history.”

By laying the flow out linearly and consulting it, users can examine the entire course of their thinking, easily return to any node, and work in a new direction from there while the other designs are preserved.
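One way to picture such a branching record (my own sketch; the article does not specify a data structure) is a tree of history nodes, where returning to an earlier node and branching preserves every alternative line of work:

```python
from dataclasses import dataclass, field

@dataclass
class HistoryNode:
    """One step in the design conversation; branches keep alternatives."""
    description: str
    children: list["HistoryNode"] = field(default_factory=list)

    def branch(self, description: str) -> "HistoryNode":
        """Return to this node and set off in a new direction."""
        child = HistoryNode(description)
        self.children.append(child)
        return child

root = HistoryNode("blank canvas")
oak = root.branch("draw an oak leaf")
maple = oak.branch("morph it toward a maple leaf")  # one line of work
serrated = oak.branch("make the edges serrated")    # a preserved alternative
```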

 

Conclusion

Many people worry that AI will take our jobs and put us out of work. I see the possibility of a more optimistic future, one in which we still have a role to play, and in which we are in fact stronger than before.

In such a future, we won’t compete with our machines; we will use them to expand what we can do, just as we always have. But to get there, we need to keep reminding ourselves what tools are for. Tools are not merely meant to make our lives easier; their purpose is to give us leverage, to let us reach further and see more. Tools can lift stones, but it is people who build cathedrals.

 

Related articles

Experience Design in the Age of Machine Learning (Part 1): Implications for designers and data scientists creating systems that learn from human behavior

Experience Design in the Age of Machine Learning (Part 2): Implications for designers and data scientists creating systems that learn from human behavior

 


Patrick Hebron

Original link:

https://medium.com/artists-and-machine-intelligence/rethinking-design-tools-in-the-age-of-machine-learning-369f3f07ab6c

This article is an original translation by UXC. For any other use, please contact the author.