Preface

In recent years, artificial intelligence has become a red-hot topic. In China, industry leaders talk constantly about machine learning and big data; in the US, PhD graduates in artificial intelligence are so sought after that their salaries rival those of NFL quarterbacks. AI has become the talk of the Internet, as if not knowing about it means being out of date.

As an iOS developer, I have also been following the artificial intelligence and machine learning boom. This article summarizes what I have seen in Silicon Valley and Seattle and shares my thoughts on artificial intelligence.

What is artificial intelligence?

We often hear about artificial intelligence (AI) alongside terms such as Big Data, Machine Learning, and Neural Networks. So what is the difference between these terms? Let's start with a short story.

There was once a programmer named Newton. He defined a method to calculate the speed of free fall:

func getVelocity(time t: Float) -> Float {
    return 9.8 * t
}

How did he arrive at this method? After being hit on the head by an apple, Newton worked out the formula through a great deal of logical derivation and experimental verification. This is still the traditional way of writing programs: a person first understands the internal logic of a problem, then defines the method. To this day, most programs are written this way.

With artificial intelligence, the machine defines the method itself. Artificial intelligence can be achieved in many ways, for example by having a machine simulate a brain and then think like a human to define methods. Machine learning is one such approach: the method is derived from big data. If machine learning had existed in Newton's time, it would have worked out the speed of free fall like this:

  1. Collect as much free-fall data as possible. Suppose the data collected are as follows:
Observer              Velocity (m/s)   Time (s)
Galileo               9.8              1
Newton                19.6             2
Leonardo da Vinci     29.4             3
Aristotle             30               4
  2. Analyze the data. Machine learning would determine that Aristotle's data point is wrong and reject it; the other three fit the same pattern.

  3. Define the method. Based on the data above, machine learning concludes that velocity = 9.8 * time.

As more data is collected, machine learning becomes more accurate. The human learning process is actually very similar: books contain a great deal of knowledge (processed data); we read it, make sense of it, and come to our own conclusions.
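To make the idea concrete, here is a minimal sketch of that process, using the four samples from the table above. It is my own illustration, not anything Newton (or a real machine learning library) would do: it fits velocity = g * time by least squares, rejects the sample that deviates most from the trend, and refits.

import Foundation

// Toy "machine learning": learn the constant g from (velocity, time) samples.
let samples: [(velocity: Double, time: Double)] = [
    (9.8, 1), (19.6, 2), (29.4, 3), (30, 4)   // the last sample is Aristotle's bad data
]

// Least-squares slope through the origin: g = sum(v * t) / sum(t * t)
func fitG(_ data: [(velocity: Double, time: Double)]) -> Double {
    let numerator = data.reduce(0) { $0 + $1.velocity * $1.time }
    let denominator = data.reduce(0) { $0 + $1.time * $1.time }
    return numerator / denominator
}

let roughG = fitG(samples)                                                // first pass over all data
let cleaned = samples.filter { abs($0.velocity - roughG * $0.time) < 4 }  // reject the outlier
let learnedG = fitG(cleaned)                                              // refit on the clean data
print(learnedG)                                                           // 9.8

With more (and cleaner) samples, the same fit gets closer to the true value, which is exactly the point of the story above.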





Kepler was, in a sense, a famous human practitioner of machine learning: he spent the first half of his life observing the stars and recording the data, and the rest of his life analyzing that data with logic and insight, finally arriving at the model of planetary motion later refined by Newton. The model was then used to predict the motion of other planets, while new data was used to refine the model's parameters to near perfection.

So what is the advantage of learning from data compared with people defining methods by hand? I think it comes down to speed and accuracy. When a person defines a method, they have to work out the causes and effects, the logical relationships, and all kinds of special cases; this can take a long time to study and prove, and some extreme cases still get overlooked, leaving loopholes in the definition. Data, on the other hand, is very cheap to obtain in the Internet age. The easy availability of large amounts of data makes defining methods faster and faster, while the broad coverage of real-world data makes the resulting methods more accurate.

Dr. Wu Jun summarized the advantage of big data in his book The Intelligent Age roughly as follows: "When we cannot determine the causal relationship between things, data provides a new way to solve the problem; the information contained in the data helps us eliminate uncertainty, and the correlations in the data can, to some extent, stand in for causation and help us reach an answer. This is the core of big data."

Let's go back to Newton's free-fall experiment. In practice, the experimental data fed to machine learning might look more like this:

On the night of September 15, light rain and a breeze. Galileo dropped a shot put from the Leaning Tower of Pisa: mass 4 kg, initial velocity 0, time to reach the ground more than 6 s.

This data sample contains many features: date, humidity, wind force, height of the Leaning Tower of Pisa, mass of the shot put, initial velocity, time to reach the ground, and so on. So which of these is the speed of free fall actually related to? If the machine is left to figure this out entirely on its own, that is unsupervised learning. If we tell the machine to ignore the mass and the weather and focus on the time, that is supervised learning. The latter builds on human knowledge, giving machine learning a general direction.
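As a rough illustration of the difference (my own sketch, with made-up field names), a supervised setup is one where a human decides in advance which input and which target the learner gets to see:

// One recorded experiment with many candidate features (hypothetical struct).
struct FallSample {
    let rained: Bool
    let windForce: Double       // arbitrary scale
    let towerHeight: Double     // metres
    let mass: Double            // kg
    let initialVelocity: Double // m/s
    let timeToGround: Double    // seconds
    let finalVelocity: Double   // m/s, the quantity we want to predict
}

// Supervised learning: we hand the learner labeled (time, velocity) pairs and nothing else.
func supervisedPairs(from samples: [FallSample]) -> [(input: Double, target: Double)] {
    return samples.map { (input: $0.timeToGround, target: $0.finalVelocity) }
}

// An unsupervised learner would instead receive the raw FallSample values and be left
// to discover the relevant structure on its own.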

Described this way, machine learning hardly sounds "intelligent"; it just produces answers faster and more accurately. If that were all there is to it, AlphaGo could study every game ever played by 9-dan professionals (the outcomes of those games are already known, which amounts to humans having labeled the good and bad moves, so this is supervised learning), and its level would top out at a notional "10 dan": a little stronger and a little steadier than the best 9-dan players. The truth, however, is that AlphaGo plays far beyond the best human level.

In fact, when AlphaGo plays, the system tells it after every move whether its estimated win rate has gone up or down. This constant feedback lets AlphaGo improve its play as it goes and encourages it to try moves that humans have never played, which is how it surpasses humans. This kind of feedback-driven training is called reinforcement learning.
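The loop below is a deliberately tiny sketch of that feedback idea (a greedy, bandit-style update of my own, not AlphaGo's actual algorithm): pick a move, receive the change in estimated win rate as a reward, and adjust the preference for that move.

// Toy reinforcement learning: preferences over candidate moves are nudged by feedback.
struct ToyAgent {
    var preferences: [Double]      // one score per candidate move
    let learningRate = 0.1
    let explorationRate = 0.1      // occasionally try an unusual move

    mutating func playRound(winRateChange: (Int) -> Double) {
        guard !preferences.isEmpty else { return }
        let move: Int
        if Double.random(in: 0..<1) < explorationRate {
            // Explore: try a move that may never have been played before.
            move = Int.random(in: 0..<preferences.count)
        } else {
            // Exploit: pick the move that currently looks best.
            move = preferences.indices.max { preferences[$0] < preferences[$1] }!
        }
        let reward = winRateChange(move)             // feedback: did the win rate improve?
        preferences[move] += learningRate * reward   // reinforce good moves, discourage bad ones
    }
}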





Finally, a few words about neural networks. In my admittedly shallow understanding, a neural network is composed of neurons, and each neuron performs a specific function. For example, to find female dogs in a pile of animal photos, the first neuron determines which animals are dogs, and the second distinguishes females from males.

In the example above, the first neuron makes its judgment and passes the result to the second; in other words, the input of the latter is the output of the former. That is the idea behind layering in a neural network. AlphaGo is, at its core, a very large neural network built on this principle of layered neurons.
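As a toy picture of that layering (hand-written rules standing in for real neurons, which actually compute weighted sums and nonlinearities), the second stage below consumes the output of the first:

// Hypothetical pre-extracted attributes of a photo; a real network would work on pixels.
struct AnimalPhoto {
    let looksLikeDog: Bool
    let looksFemale: Bool
}

// "Neuron" 1: is the animal a dog at all?
func isDog(_ photo: AnimalPhoto) -> Bool { return photo.looksLikeDog }

// "Neuron" 2: its input builds on neuron 1's output, which is the layering idea in miniature.
func isFemaleDog(_ photo: AnimalPhoto) -> Bool {
    return isDog(photo) && photo.looksFemale
}

let photos = [AnimalPhoto(looksLikeDog: true,  looksFemale: true),
              AnimalPhoto(looksLikeDog: true,  looksFemale: false),
              AnimalPhoto(looksLikeDog: false, looksFemale: true)]
let femaleDogs = photos.filter(isFemaleDog)   // keeps only the first photo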

Because machine learning has stood out so far ahead of the other branches of artificial intelligence, talking about AI these days is almost the same as talking about machine learning.

What are the applications of artificial intelligence in iOS development?

I'm an iOS engineer, so what does artificial intelligence have to do with me? AI has actually been on iOS for a long time and is likely to be embedded in our daily development, so it pays to stay alert to it. Here I will share some applications of artificial intelligence on iOS.

For one thing, Steve Jobs put Siri, the intelligent voice assistant, on the iPhone years ago. Siri was the first successful application of AI and machine learning on mobile: it combines speech recognition and natural language processing (the former is, of course, part of the latter). Later, because of Apple's closed ecosystem, Siri never accumulated enough data, and as noted above, data volume is the key to improving an AI system, so Siri has ended up rather weak.





Similar to Siri, Facebook has integrated chatbots into its Messenger app. At last year's F8, I vividly remember them touting this as a new era of app development: apps led by chatbots and artificial intelligence would replace traditional mobile apps, and Messenger would evolve from a chat app into a platform, even an operating system. This is similar to WeChat's Mini Program strategy, but with more artificial-intelligence gimmicks. Unfortunately, many AI experts tell me that chatbots are still a long way from maturity.





Facebook chatbot: AI-powered shopping customer service

Until Prisma came along, artificial intelligence had no real success story on iOS. Alexey developed Prisma after reading the paper "A Neural Algorithm of Artistic Style" and a related paper on image synthesis with convolutional neural networks. The basic flow is as follows (a rough client-side sketch follows the list):

  1. The user uploads a photo
  2. The photo is sent to the cloud, where a neural network analyzes it
  3. The network outputs a redrawn, stylized artwork
  4. The redrawn work is downloaded back to the phone
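Here is what the client side of that flow might look like, assuming a hypothetical endpoint (https://example.com/stylize is made up; this is not Prisma's real API):

import UIKit

// Upload a photo, let the cloud-side neural network redraw it, download the result.
func stylize(_ photo: UIImage, completion: @escaping (UIImage?) -> Void) {
    guard let jpeg = photo.jpegData(compressionQuality: 0.8),
          let url = URL(string: "https://example.com/stylize") else {
        completion(nil)
        return
    }
    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.setValue("image/jpeg", forHTTPHeaderField: "Content-Type")
    // Steps 1 and 2: send the photo to the cloud for analysis.
    let task = URLSession.shared.uploadTask(with: request, from: jpeg) { data, _, _ in
        // Steps 3 and 4: the redrawn artwork comes back and is decoded on the phone.
        completion(data.flatMap { UIImage(data: $0) })
    }
    task.resume()
}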





The hardest part of the app is the time-consuming second step: the model analysis that works out the style of the photo. Alexey optimized the details of the neural network so that Prisma could redraw an image in seconds. In later iterations of the app, the neural network model was deployed directly on the device, both to make it faster and to solve the latency problems overseas users had connecting to the cloud. By using the iPhone's powerful processor to render images entirely offline, Prisma became the first mobile application to run a style-transfer neural network offline. It can process an image in less than half a second, won an iOS App of the Year award for its blend of art and technology, and has hundreds of millions of users.

One last app: Topology Eyewear, an e-commerce app for custom-made glasses. It uses facial recognition on the user and then renders different glasses on the phone so the user can preview how they would look wearing them.







The success of these third-party apps has drawn the attention of the big players. Snapchat, Instagram, and WhatsApp have all brought artificial intelligence into their filters. At the same time, Facebook and Google are making their AI frameworks lighter so they can be deployed on mobile devices. At WWDC 2017, machine learning was the buzzword of the conference, and Apple officially launched Core ML. It covers both visual recognition and natural language processing, and it ships with many pre-trained models as well as tools for generating highly customized models. Its ease of use made Core ML an instant hit with developers.

As Kai-fu Lee said in his commencement speech at Columbia University, "In the future, as the cost of hardware, software, and network bandwidth keeps falling, artificial intelligence will cost almost as little as electricity."

Where do iOS developers go from here?

First of all, I think iOS and AI are not opposites but complements. iOS applications need artificial intelligence to improve their efficiency and expand their capabilities, and artificial intelligence needs platforms like iOS to turn technology into products. The intelligent age is an upgrade of and complement to the mobile age rather than a replacement for it. So there is still a market for iOS development, and we do not have to worry about AI putting us out of work.

But iOS developers do need to embrace AI. At this year's try! Swift conference, two talks were dedicated to machine learning. If you follow tech blogs, Facebook's iOS engineering posts in recent years have been full of new features that draw on artificial intelligence; in "The Engineering Behind Social Recommendations", for example, Facebook's New York team used big data and machine learning to extract large amounts of relevant information so the mobile app could better recommend restaurants and places to visit, tailored to the user's state and location. Google has incorporated AI into the vast majority of its iOS applications. If iOS developers reject AI, we may never be able to build apps that satisfy our users. Just as algorithms and computer systems are fundamental skills for programmers today, artificial intelligence will be an essential skill for programmers in the future.

AI will also open up many new opportunities for iOS development. Professor Michael Jordan, a leading figure in the field of artificial intelligence, puts it as AI = IA + II + AA. As an iOS developer, I agree. Here is how I understand it:

Artificial Intelligence = Intelligence Augmentation + Intelligent Infrastructure + Automated Algorithms.

  • Intelligence augmentation: extending human intelligence. Google Search, for example, expands our access to knowledge, and big data could help Kerr refine Golden State's training program and tactical choices.
  • Intelligent infrastructure: the Internet of Things. Amazon's smart home devices and Amazon Go, Uber's driverless cars, and IBM's smart cities will all be tailored to each user's needs.
  • Automated algorithms: the artificial intelligence tools themselves. Deep learning, reinforcement learning, improvements to neural networks, and the introduction and application of frameworks such as TensorFlow, Caffe, and MXNet all fall into this category.

Whichever of these you look at, iOS development has a part to play. Intelligence augmentation can be delivered to users directly through apps; intelligent infrastructure needs iOS developers to build the client-side connection to the user on the device; and automated algorithms will make iOS apps more capable.

Finally, how can you, as an iOS developer, start learning AI? Core ML is Apple's official tool, so it is a good place to start. The Core ML workflow shown at WWDC has three steps (a minimal usage sketch follows the list):

  1. Obtain a machine learning model from another platform or framework
  2. Import the model into Xcode, which automatically generates a corresponding Swift interface
  3. Program against that Swift interface
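To make the three steps concrete, here is a minimal sketch using the Vision wrapper around Core ML. It assumes a hypothetical image-classification model file named Flower.mlmodel has been dragged into Xcode (step 1), so that Xcode has generated a Swift class called Flower (step 2); step 3 is the code below.

import UIKit
import CoreML
import Vision

// Classify an image with the auto-generated Flower model (hypothetical model name).
func classify(_ image: UIImage, completion: @escaping (String?) -> Void) {
    guard let cgImage = image.cgImage,
          let visionModel = try? VNCoreMLModel(for: Flower().model) else {
        completion(nil)
        return
    }
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        // Take the top classification label, if any.
        let best = (request.results as? [VNClassificationObservation])?.first
        completion(best?.identifier)
    }
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}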

The usage scenarios are as follows:

  • Sentiment analysis
  • Object recognition
  • Personalization
  • Style transfer
  • Music tagging
  • Gesture recognition
  • Natural language understanding





Object recognition applications displayed at WWDC

Google's TensorFlow, Facebook's Caffe, and Amazon's MXNet can all train great models for Core ML and iOS development, so these frameworks are worth getting familiar with too. One piece of hard-won advice: when studying AI frameworks, do not rely on books; read the official English documentation directly. These frameworks change so quickly that book material goes out of date almost immediately, and the underlying theory is best read first-hand.

Conclusion

Many people are not optimistic about artificial intelligence, arguing that it is over-hyped, just a concept, a bubble that will be hard to turn into reality. As an iOS developer, though, I find that from a purely technical point of view, today's AI technology is already enough to greatly improve and expand our apps. I hope this article gives you some inspiration as you develop for, or simply follow, iOS.

References

  • The Intelligent Age, Wu Jun
  • Machine Learning for Everyone
  • The Present and Future of Artificial Intelligence
  • A New Battlefield for Mobile AI? Apple's Core ML Builds Intelligent Applications Based on Machine Learning
  • Bringing Machine Learning to your iOS Apps
  • Everything a Swift Dev Needs to Know About Machine Learning