Full text: about 2,737 words; estimated reading time: 5 minutes

Credit: pexels.com/bertellifotografia

Recently, something notable happened in the big-data ecosystem: Cisco (CSCO) is pairing an AI hardware framework with a new deep learning server powered by eight GPUs. Wikibon Principal Analyst James Kobielus said in a recent interview that Cisco is committed to supporting Kubeflow for AI. “Kubeflow is an open source tool that makes TensorFlow compatible with the Kubernetes container orchestration engine,” Kobielus said.

TensorFlow is an open source software library for numerical computation. Its architecture is flexible and can be easily applied to a variety of processors (GPUs, TPUs, CPUs) and devices (desktop computers, server clusters, and various mobile and edge devices). TensorFlow was originally developed by the Google Brain team, part of Google’s artificial intelligence division. Its flexible numerical computing core makes it a good fit for machine learning and deep learning, and it is the kind of workload Cisco’s new eight-GPU deep learning server is built to run.
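TensorFlow's core abstraction is a graph of numerical operations that can be evaluated on different processors. The idea can be sketched in plain Python; this toy graph is purely illustrative and is not TensorFlow's actual API:

```python
# Toy dataflow graph: each node wraps an operation; evaluation walks the
# graph from the output node back through its inputs. Illustrative only --
# TensorFlow's real graphs add tensors, automatic differentiation, and
# placement on CPUs/GPUs/TPUs.

class Node:
    def __init__(self, op, *inputs):
        self.op = op          # callable applied to the evaluated inputs
        self.inputs = inputs  # upstream nodes

    def eval(self):
        return self.op(*(n.eval() for n in self.inputs))

def constant(value):
    return Node(lambda: value)

def add(a, b):
    return Node(lambda x, y: x + y, a, b)

def mul(a, b):
    return Node(lambda x, y: x * y, a, b)

# Build the graph y = (2 * 3) + 4, then evaluate it.
y = add(mul(constant(2), constant(3)), constant(4))
print(y.eval())  # -> 10
```

Building the graph first and evaluating it afterward is what lets a framework like TensorFlow decide where each operation should run.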

James Kobielus believes containerization is ushering in a new era in software. Containerization is reshaping the landscape of almost every information technology software platform and is having an impact on artificial intelligence and machine learning. Cisco, for example, recently announced that it was improving the containerization of its TensorFlow stack, Kobielus said.

When I talk about highly complex AI, I mean things like TensorFlow. For example, suppose a user builds a deep learning model in TensorFlow for a self-driving car. There will be deep learning models built into the car that use sensor data for tasks like object recognition, and there will also be deep learning models in the vehicle control domain, perhaps for handling traffic congestion in a given area.

According to Kobielus, Apache Spark often works with the Hadoop Distributed File System (HDFS) as a persistence or storage layer. Spark, which is memory-oriented, is a first choice among machine learning development environments. It is increasingly used for real-time ETL and data preparation in mixed deployments alongside TensorFlow, and it, too, is increasingly being containerized.

Kubeflow

Software containers enable enterprises to move workloads easily between different environments. Essentially, Kubeflow is a Kubernetes-based framework and set of tools for building and training machine learning models, which may be packaged in containers from the outset. Major topics in container research include Kubernetes orchestration, machine learning, and deep learning.

Containerization of DevOps workflows is fast becoming the norm for all application development, and this is especially true in the development of AI applications, Kobielus said. “Kubeflow enables DevOps to manage these applications end-to-end in a container-orchestrated environment.” Kubeflow is becoming key glue for the smart device industry, including the AI device space, and supports the containerization of AI. Azure’s new machine learning offering supports container-based model management and development, as does Apache Spark.

Kubeflow makes it as simple as possible to scale machine learning models and deploy them to production, he says. Because machine learning researchers use different tools, the main goal is to customize the stack to user needs and provide an easy-to-use machine learning stack wherever Kubernetes is already running.
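In practice, Kubeflow's training operator exposes Kubernetes custom resources such as `TFJob` for running containerized TensorFlow training. A minimal sketch of such a manifest follows; the job name, container image, and training script are hypothetical placeholders:

```yaml
# Hypothetical TFJob manifest: two worker replicas running a packaged
# TensorFlow training script. Image and script names are placeholders.
apiVersion: kubeflow.org/v1
kind: TFJob
metadata:
  name: mnist-train
spec:
  tfReplicaSpecs:
    Worker:
      replicas: 2
      restartPolicy: OnFailure
      template:
        spec:
          containers:
            - name: tensorflow
              image: my-registry/mnist:latest
              command: ["python", "train.py"]
```

Because the model and its dependencies are packaged in the container image, the same manifest can be applied to any cluster where Kubeflow is installed.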

Photo credit: pexels.com/@tomfisk

Machine learning

Machine learning has evolved into a form of data analysis for identifying patterns and predicting probabilities, and exists as a branch of artificial intelligence research. By feeding the model data with “known” answers, the computer can train itself to predict future responses to unknown situations. Machine learning has had considerable success in solving specific tasks, and it is estimated that AI and ML will be the main catalysts driving cloud computing. To work effectively, machine learning technologies need to learn efficiently and be integrated with cloud technologies, including containerization.
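The "train on known answers, then predict the unknown" loop can be shown in miniature. The following pure-Python sketch fits a one-variable linear model by gradient descent; the data and learning rate are made up for illustration:

```python
# Supervised learning in miniature: fit y = w*x + b to labeled examples
# ("known answers"), then predict an unseen input. No ML library needed.

def fit(xs, ys, lr=0.01, epochs=2000):
    """Fit y = w*x + b by gradient descent on mean squared error."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Gradients of the mean squared error with respect to w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Training data with "known" answers: y = 2x + 1.
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]
w, b = fit(xs, ys)
print(round(w * 5 + b))  # predict for the unseen input x = 5 -> 11
```

The model never saw x = 5, yet it generalizes from the labeled examples, which is the essence of the pattern-finding described above.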

With this in mind, Google developed Kubeflow, a portable, composable, and scalable machine learning stack built on top of Kubernetes. Kubeflow provides an open source platform for moving ML models by packaging them in containers and performing computation alongside the data rather than inside a monolithic stack. Kubeflow helps solve the basic problem of implementing an ML stack: building production-level machine learning solutions requires many different tools, and assembling a stack from disparate tools can complicate the pipeline and produce inconsistent results.

The advantages of deep learning

Deep learning is a branch of machine learning that allows deep neural network computers to “learn from experience” and understand the world in terms of a hierarchy of concepts. This hierarchy lets computers handle complex concepts by building them on top of simple ones. Real-world organizations have combined machine learning and open source platform technologies in ways that the original developers of these independent open source projects never anticipated, Kobielus said.
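That "complex concepts built from simple ones" hierarchy can be shown with a tiny two-layer network. Here the weights are hand-set for illustration rather than learned: the hidden layer encodes the simple concepts OR and AND, and the output layer composes them into the more complex concept XOR:

```python
# Hierarchy in miniature: layer 1 encodes simple concepts (OR, AND);
# layer 2 composes them into a complex one (XOR). Weights are hand-set
# for illustration, not learned by training.

def step(x):
    return 1 if x > 0 else 0

def neuron(inputs, weights, bias):
    return step(sum(i * w for i, w in zip(inputs, weights)) + bias)

def xor(a, b):
    # Hidden layer: two simple concepts.
    h_or = neuron([a, b], [1, 1], -0.5)    # fires if a OR b
    h_and = neuron([a, b], [1, 1], -1.5)   # fires if a AND b
    # Output layer: the complex concept built from the simple ones.
    return neuron([h_or, h_and], [1, -2], -0.5)  # OR but not AND

print([xor(a, b) for a in (0, 1) for b in (0, 1)])  # -> [0, 1, 1, 0]
```

A single neuron cannot compute XOR; stacking a second layer on top of simpler detectors is exactly the depth that gives deep learning its power.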

I think deep learning and AI will be very important and necessary to bring the cloud computing revolution to every device. We’re making progress in mobile computing, and AI is going to be around everyone and on every machine, smart devices and autonomous devices.


Such innovations are already happening in areas such as face recognition and speech recognition. However, this needs to be done in a standardized way, or applied to edge deployment environments through standardized cloud computing that is packaged and uses Kubernetes. He continued:

As a developer, I think the key is being able to package models that perform different tasks and to wire them together in an orchestrated way so that they run as components in a distributed application environment. This also allows these models to be monitored and managed in real time, typically through the control plane.
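The wiring Kobielus describes can be sketched as a simple pipeline of packaged model components. The two "models" below are stand-ins, not real deployed services; in a containerized deployment each stage would run in its own container:

```python
# Sketch: two packaged "model" components wired into a pipeline, the way
# containerized models might be composed in a distributed application.
# Both components are illustrative stand-ins for real models.

def detect_objects(frame):
    """Stand-in for a containerized object-recognition model."""
    return [obj for obj in frame if obj in {"car", "pedestrian"}]

def plan_action(objects):
    """Stand-in for a downstream vehicle-control model."""
    return "brake" if "pedestrian" in objects else "cruise"

def pipeline(data, stages):
    # Each stage's output feeds the next stage's input.
    for stage in stages:
        data = stage(data)
    return data

print(pipeline(["tree", "pedestrian"], [detect_objects, plan_action]))
# -> brake
```

Swapping a stage for a new model version then becomes a deployment change rather than a code change, which is what container orchestration makes manageable.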

Eclipse and the Cloud Native Computing Foundation (CNCF) recently announced that they are working together to build a packaged open source stack and the tools needed to deploy deep learning and machine learning containers to edge devices. The Eclipse Foundation provides a business-friendly environment for open source software, innovation, and collaboration.

A few months ago, the Eclipse Foundation launched a project called Ditto, sponsored by Bosch. The project focuses on using digital twin technology to develop artificial intelligence designed to operate in a containerized manner on edge devices.

Photo credit: pexels.com/artunchained

Data management

Data management is about managing and maintaining data and metadata assets. In an interview, Kobielus said:

I like to use the word ‘management.’ The industry manages the stack on several levels. The community decides what is accepted as a project, what is submitted to a working group to build, and what eventually rises out of the sandbox, incubated under some form of community governance. There is also vendor oversight: oversight per vendor, in the cloud, and on the server.

Kobielus sees this type of data management as a necessary part of this new era. Some things will be accepted by the general public and develop on their own; others fall by the wayside, as happened when Hadoop started, he said:

I remember pieces of Hadoop, such as the Mahout machine learning library. Some have been adopted, but not at the level of the Spark libraries.

He thinks data scientists are the core developers of artificial intelligence, yet they haven’t realized they need to know more about containers and about Kubernetes, “because it’s going to be in their toolbox as a target environment to deploy their models to.”

He concluded by saying that data scientists, AI developers, data architects and others in the industry all need to understand how and why these new technologies are now core components in their data stacks. Everyone involved needs to understand this or they will simply be left behind by the march of the data age.

