{% note info %}

Read the paper online

{% endnote %}

Questions

The fundamental question

"Less data" refers to a small sample size;

In my opinion, low-quality data is not necessarily unreliable data; rather, its feature space is small and carries little information, so no useful conclusions can be drawn after modeling.

An early view was to train the model directly on such scarce, low-quality data and focus on improving the model's accuracy. I don't think this works, at least not with machine learning. Machine learning ultimately studies statistical problems with mathematical methods: it does not care about the concrete meaning of data features, but it is sensitive to the data's distribution. What machine learning produces are expectations and probabilities, so naturally the more samples we have, the more accurate the result.

Another idea is that data holders form coalitions to train models. I think federated learning is on the right track.
1. How do I clean data?
The ideal approach would be to bring all participants' data together directly, were it not for legal, technical, and cost constraints.

Even then, because the data held by different organizations is more or less heterogeneous, it must be aligned when aggregated, which means losing the non-intersecting data.

In that case, why not align first and then aggregate, encrypting the data to be aggregated? On one hand, this has little effect on the final training result; on the other, it satisfies the public's demand for data privacy.
2. What is data poisoning?
Horizontal federated learning enlarges the sample space and improves model accuracy.

Vertical federated learning broadens the feature space, enabling the analysis to cover more fields: a 1+1>2 effect.
What exactly is federated transfer learning?
In federated learning, are more participants always better?

Abstract

{% hideToggle vocabulary %}

| vocabulary | explanation | vocabulary | explanation |
| --- | --- | --- | --- |
| strengthening | n. strengthening | propose | v. propose |
| beyond | prep. later than | secure federated learning | secure federated learning |
| federated transfer learning | federated transfer learning | mechanism | n. mechanism |
| compromise | v./n. compromise | CCS | abbr. Conference on Computer and Communications Security |
| methodology | n. methodology | phrase | n. phrase |
| GDPR | General Data Protection Regulation | | |

{% endhideToggle %}

There are two major challenges in AI right now:

  • In most industries, data exists in the form of isolated islands
  • Data privacy and security requirements are being continually strengthened

There are three aspects of Secure Federated Learning:

  • Horizontal Federated Learning
  • Vertical Federated Learning
  • Federated Transfer Learning

This paper introduces the definitions, architectures, and applications of the federated learning framework and surveys existing work on federated learning. In addition, it proposes building data networks among organizations based on federated mechanisms, as a solution for sharing knowledge without compromising user privacy.

Introduction

{% hideToggle vocabulary %}

| vocabulary | explanation | vocabulary | explanation |
| --- | --- | --- | --- |
| Go | n. the game of Go | defeat | vt. to beat |
| cutting-edge | adj. cutting-edge | medical care | medical care |
| walks of life | all walks of life | inevitable | adj. unavoidable |
| availability | n. availability | permission | n. permission |
| hard copy | hard copy | grant | v./n. grant |
| commercial | adj. commercial | citation | n. citation, reference |
| fuse | v. fuse, merge | if not impossible | if not impossible |
| recommendation | n. recommendation | complicated administrative procedure | complicated administrative procedures |
| integration | n. integration | resistance | n. resistance |
| institution | n. institution | issue | n. issue, problem |
| cause great concern | cause great concern | data breach | data breach, data leak |
| protest | n. protest | enforce | v. enforce |
| protect | v. protect | plain | adj. plain, simple |
| stiff fine | stiff fine, severe penalty | violate | v. violate |
| bill | n. bill | act | n. act, law |
| enact | v. enact, promulgate | Cyber Security Law | Cyber Security Law |
| General Principles of Civil Law | General Principles of the Civil Law | tamper | v. tamper |
| tamper with | tamper with | conduct | v. conduct, carry out |
| obligation | n. obligation | pose | v. pose |
| dilemma | n. dilemma | data fragmentation | data fragmentation |
| to be more specific | to be more specific | be responsible for | be responsible for |
| promote | vt. promote | compliant | adj. compliant, submissive |

{% endhideToggle %}

Thanks to the infusion of market funds and the support of big data, AI has enjoyed an unprecedented boom since 2016.

In most areas, data is limited or of low quality, making AI technology incredibly difficult to implement. One possibility is to transport data from different institutions to one place and merge them together. However, due to industry competition, privacy security and complex management procedures, even data integration between different parts of the same company can encounter great resistance.

Facebook’s privacy breaches caused widespread protests and countries around the world began to tighten data security and privacy laws. This also brings new challenges to the data transactions that are commonly used in AI today.

GDPR:

  • Restricts automated individual decision-making and profiling
  • Requires that model decisions be explainable
  • Grants users the right to be forgotten, allowing them to delete or withdraw their personal data
  • Requires that data privacy be considered at the design level (privacy by design)
  • Requires clear and plain language when explaining to users how their data will be used

The traditional data-processing model in AI involves simple data-transaction chains: one party collects data and transfers it to another party responsible for cleaning and fusing it; eventually a third party takes the integrated data and builds a model for others to use. The model is often the final product, sold as a service. Traditional processing faces challenges from the new regulations, and users may fall foul of the law simply by not knowing how the model will be used in the future. As a result, we are in a dilemma: data exists as isolated islands, and in many situations we are forbidden to collect and merge data from different places for AI processing.

To promote federated learning, the authors hope to shift the focus of AI development from improving model performance, which is what much of the current AI field does, to studying data-integration methods that comply with data privacy and security laws.

An Overview of Federated Learning

{% hideToggle vocabulary %}

| vocabulary | explanation | vocabulary | explanation |
| --- | --- | --- | --- |
| effort | n. effort | personalizable | adj. personalizable |
| optimization | n. optimization | massive | adj. massive |
| partition | v./n. partition | decentralized | adj. decentralized |
| preliminary | adj. preliminary | foundation | n. foundation |
| multiagent theory | multi-agent theory | data mining | data mining |
| workflow | n. workflow | consolidate | v. consolidate |
| respective | adj. respective | conventional | adj. conventional |
| guarantee | n. guarantee | identify | v. identify |
| simulation | n. simulation | proof | n. proof |
| complete | adj. complete | desirable | adj. desirable |
| partial | adj. partial | disclosure | n. disclosure |
| semi-honest | adj. semi-honest | verification | n. verification |
| reveal | vt. reveal | collude | v. collude |
| well-defined | adj. well-defined | desire | n. desire |
| line of work | line of work, field | anonymity | n. anonymity |
| diversification | n. diversification | obscure | v./adj. obscure |
| restore | v. restore | approach to | an approach to, a method for |
| transmit | v. transmit | homomorphic encryption | homomorphic encryption |
| adopt | v. adopt | additively | adv. additively |
| polynomial approximation | polynomial approximation | intermediate | adj. intermediate |
| constrain | vt. constrain | constraint | n. constraint |
| scale | n. scale | poisoning | n. poisoning |
| loophole | n. loophole | variant | n. variant |
| constant fraction | constant fraction | blockchain | n. blockchain |
| facilitate | v. facilitate | leverage | vt. leverage |
| scalability | n. scalability | robustness | n. robustness |
| categorize | v. categorize | identical | adj. identical |
| regional | adj. regional | scheme | n. scheme |
| intersection | n. intersection | address | v. address, try to solve |
| straggler | n. straggler | partition | v. partition |
| compression | n. compression | bandwidth | n. bandwidth |
| preserving | preserving | regression | n. regression |
| linear | adj. linear | entity | n. entity |
| applicable | adj. applicable | commerce | n. commerce |
| revenue | n. revenue | expenditure | n. expenditure |
| retain | v. retain | corrupted | adj. corrupted |
| geographical | adj. geographical | restriction | n. restriction |
| portion | n. portion | exceeding | exceeding |
| decrypt | v. decrypt | converge | v. converge |
| subject to | be subject to | Generative Adversarial Network (GAN) | generative adversarial network |
| entity | n. entity | alignment | n. alignment |
| lossless | adj. lossless | gather | v. gather |
| scale | vi. scale | parallel | adj. parallel |
| randomness | n. randomness | secrecy | n. secrecy |
| inability | n. inability | terminate | v. terminate |
| oblivious | adj. oblivious | overall | adj. overall |
| commercialize | v. commercialize | incentive | n. incentive |
| manifest | v. manifest | permanent | adj. permanent |
| better off | better off | consensus | n. consensus |

{% endhideToggle %}

Several important elements of optimization problems in federated learning:

  • The cost of communicating between a large number of locations
  • Uneven distribution of data
  • Equipment reliability

Definition of Federated Learning

Assume there are N data owners $\{\mathcal{F}_1,\cdots,\mathcal{F}_N\}$ who wish to jointly train a machine learning model on their respective data $\{\mathcal{D}_1,\cdots,\mathcal{D}_N\}$. A conventional approach is to put all the data together, $\mathcal{D}=\mathcal{D}_1\cup\cdots\cup\mathcal{D}_N$, and train a model $\mathcal{M}_{SUM}$ on $\mathcal{D}$.

A federated learning system is a learning process in which the data owners collaboratively train a model $\mathcal{M}_{FED}$, during which no data owner $\mathcal{F}_i$ exposes its data $\mathcal{D}_i$ to the others.

Let $\mathcal{V}_{FED}$ denote the accuracy of $\mathcal{M}_{FED}$; it should be very close to $\mathcal{V}_{SUM}$, the accuracy of $\mathcal{M}_{SUM}$.

Formally, let $\delta > 0$. If

$$\vert \mathcal{V}_{FED}-\mathcal{V}_{SUM} \vert < \delta \tag{1}$$

the federated learning algorithm is said to have a $\delta$-accuracy loss.
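The δ-accuracy-loss condition in Eq. (1) is simple enough to state as code (a minimal sketch; the function name and the sample accuracies are mine, not from the paper):

```python
def has_delta_accuracy_loss(v_fed: float, v_sum: float, delta: float) -> bool:
    """Return True if |V_FED - V_SUM| < delta, i.e. the federated model's
    accuracy is within delta of the centrally trained model's accuracy."""
    return abs(v_fed - v_sum) < delta

# A federated model within one point of the centralized baseline:
print(has_delta_accuracy_loss(0.912, 0.920, delta=0.01))  # True
```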

Privacy of Federated Learning

Privacy is an important attribute in federated learning, which requires security models and analysis to provide meaningful privacy guarantees. This paper introduces some privacy techniques, identification methods and potential challenges in preventing indirect privacy leakage in federated learning.

  • Secure Multi-Party Computation (SMC) :

    The SMC security model naturally involves multiple parties and provides security proofs in a well-defined simulation framework, guaranteeing complete zero knowledge: each party knows nothing except its own input and output. Zero knowledge is desirable, but it usually requires complicated computation protocols and may not be achieved efficiently. Under lower security requirements, SMC-based security models can be built in exchange for efficiency.

    Studies conducted:

    1. MPC Protocols: Used for training and validation of models without leaking sensitive data
    2. Sharemind: One of the most advanced SMC frameworks
    3. 3PC model: Consider security in a semi-honest and malicious assumption
  • Differential Privacy:

    In differential privacy, k-anonymity, and l-diversity methods, noise is added to the data, or generalization is used to obscure certain sensitive attributes, so that a third party cannot distinguish individuals and the data cannot be reconstructed, protecting user privacy. Fundamentally, however, these methods still transfer the data elsewhere, and they involve a trade-off between accuracy and privacy.

  • Homomorphic Encryption:

    In homomorphic encryption, neither the data nor the model itself is transmitted, nor can they be guessed from the other participants' data. As a result, leakage at the raw-data level is almost impossible.

    In practice, additively homomorphic encryption is widely used, and polynomial approximation is needed to evaluate the nonlinear functions in machine learning algorithms. This again leads to a trade-off between accuracy and privacy.

  • Indirect information leakage

    Some precursors to federated learning expose intermediate results. For example, when parameters are uploaded from an optimization algorithm such as SGD, the gradients, combined with knowledge of the data structure, may leak important data information if no security guarantees are provided.

    Members of a federated learning system can maliciously attack other participants to learn from their data by embedding backdoors.
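As a concrete illustration of the noise-adding idea behind differential privacy, here is a minimal Laplace-mechanism sketch (an illustrative toy, not a technique from the paper's implementations; the dataset and sensitivity value are assumptions of the example):

```python
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise as the difference of two
    i.i.d. exponential random variables with mean `scale`."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def dp_mean(values, sensitivity: float, epsilon: float) -> float:
    """Release the mean with epsilon-differential privacy.

    `sensitivity` bounds how much one individual can change the mean;
    larger epsilon means less noise and weaker privacy."""
    true_mean = sum(values) / len(values)
    return true_mean + laplace_noise(sensitivity / epsilon)

random.seed(0)
incomes = [30_000, 45_000, 52_000, 61_000, 75_000]   # made-up records
print(dp_mean(incomes, sensitivity=100_000 / len(incomes), epsilon=1.0))
```

The released value is useful in aggregate, yet no third party can tell from it whether any one individual was in the dataset, which is exactly the trade-off between accuracy and privacy described above.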

Researchers have also begun to consider introducing blockchain as a platform to facilitate federated learning. Hyesung Kim et al. proposed a blockchained federated learning architecture (BlockFL), which uses a blockchain to exchange and verify updates of local learning models between mobile devices. They also analyze optimal block generation, network scalability, and robustness.

A Categorization of Federated Learning

We can classify federated learning by using the distribution characteristics of data as criteria.

The data held by data owner $i$ is represented by the matrix $\mathcal{D}_i$, where each row of the matrix is a sample and each column is a feature.

Some datasets may also contain a label field. We use $\mathcal{X}$ to denote the feature space, $\mathcal{Y}$ the label space, and $\mathcal{I}$ the sample-ID space; together they constitute the complete training data $(\mathcal{I}, \mathcal{X}, \mathcal{Y})$.

The feature spaces and sample spaces of the participants' data may not be identical, so we can classify federated learning by how the data is distributed across the parties' feature and sample-ID spaces: horizontal federated learning, vertical federated learning, and federated transfer learning.

  • Horizontal Federated Learning:

    Also known as sample-based federated learning: the datasets share the same feature space, but their samples differ. It can be described as:


    $$\mathcal{X}_i=\mathcal{X}_j,\quad\mathcal{Y}_i=\mathcal{Y}_j,\quad\mathcal{I}_i\neq\mathcal{I}_j,\quad\forall\,\mathcal{D}_i,\mathcal{D}_j,\ i\neq j \tag{2}$$

    For example, for two banks in different regions, their users come from their respective regions, and the intersection of sample space is very small. But their businesses are very similar, so the feature space is the same.

    == Security definition == : A typical horizontal federated learning system assumes that participants are honest and the server is honest-but-curious, meaning that only the server could compromise participants' privacy. But participants can also be malicious, which poses additional privacy challenges.

  • Vertical Federated Learning:

    Also known as feature-based Federated learning, it applies to the situation where two datasets share the same sample ID space but different feature space.

    Take, for example, a bank and an e-commerce company in the same city, whose customers are mostly the residents of that area, so their sample spaces overlap heavily. But while the bank records users' income, spending, and credit ratings, the e-commerce company records users' browsing and purchase history, so their feature spaces are very different.

    Vertical federated learning aggregates these different features and computes training losses and gradients in a privacy-preserving manner, building a model jointly from the data of all cooperating participants. Under this federated mechanism, all participants have equal identity and status, and the federation helps everyone establish a "common wealth" strategy, which is why it is called federated learning.


    $$\mathcal{X}_i\neq\mathcal{X}_j,\quad\mathcal{Y}_i\neq\mathcal{Y}_j,\quad\mathcal{I}_i=\mathcal{I}_j,\quad\forall\,\mathcal{D}_i,\mathcal{D}_j,\ i\neq j \tag{3}$$

    == Security definition == : A typical vertical federated learning system assumes honest-but-curious participants. For example, with only two participants, the two parties do not collude, and at most one of them can be corrupted by an adversary. Security then means the adversary can only learn data from the client it corrupted, and nothing from the other client beyond what the inputs and outputs reveal. To facilitate secure computation between the two parties, a semi-honest third party (STP) may be introduced, assumed not to collude with either participant. After training, each participant holds only the model parameters associated with its own features, so at inference time the two parties must collaborate to produce an output.

  • Federated Transfer Learning:

    Federated transfer learning is applied in scenarios where two data sets are different in both their sample space and feature space.

    Suppose you have two institutions, a bank in China and an e-commerce company in the United States. Due to geographical limitations, there is only a small overlap between the user groups of the two organizations. On the other hand, there is only a small overlap in the feature space due to the different businesses.


    $$\mathcal{X}_i\neq\mathcal{X}_j,\quad\mathcal{Y}_i\neq\mathcal{Y}_j,\quad\mathcal{I}_i\neq\mathcal{I}_j,\quad\forall\,\mathcal{D}_i,\mathcal{D}_j,\ i\neq j \tag{4}$$

    == Security definition == : A typical federated transfer learning system consists of two participants and has the same security definition as a vertical federated learning system.
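Conditions (2)–(4) can be summarized as a small dispatch function over two parties' feature spaces and sample-ID spaces (an illustrative sketch; real settings compare degrees of overlap rather than exact set equality, and all names here are mine):

```python
def fl_category(features_a: set, features_b: set,
                ids_a: set, ids_b: set) -> str:
    """Classify the federated setting of two parties by how their
    feature spaces and sample-ID spaces relate (cf. Eqs. (2)-(4))."""
    same_features = features_a == features_b
    same_ids = ids_a == ids_b
    if same_features and not same_ids:
        return "horizontal"   # same features, different samples, Eq. (2)
    if not same_features and same_ids:
        return "vertical"     # same samples, different features, Eq. (3)
    if not same_features and not same_ids:
        return "transfer"     # little overlap in either space, Eq. (4)
    return "centralized"      # identical data: no federation needed

# Two regional banks: same features, disjoint customers.
print(fl_category({"income", "credit"}, {"income", "credit"},
                  {"u1", "u2"}, {"u3", "u4"}))  # horizontal
```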

Architecture for a federated learning system

The architecture of a horizontal federated learning system

In a horizontal federated learning system, k participants with the same data structure collaboratively learn a machine learning model with the help of a parameter server or cloud server. The assumption is that the participants are honest while the server is honest-but-curious, so no leakage of information from any participant to the server is allowed. The training process of a horizontal federated learning system consists of the following four steps:

  1. Participants locally compute training gradients, mask a selection of the gradients with encryption, differential privacy, or secret-sharing techniques, and upload the masked results to the server
  2. The server performs secure aggregation without learning any information about the participants
  3. The server sends the aggregated results back to the participants
  4. Participants update their respective models with the decrypted gradients

Iterate these four steps until the loss function converges, and the whole training process is completed.
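Step 1's gradient masking can be illustrated with pairwise additive masks that cancel in the server-side sum (a simplified sketch of secure aggregation, not the paper's protocol; production schemes also handle dropouts and derive the shared masks via key agreement):

```python
import random

def mask_gradients(grads: list[list[float]], seed: int = 0) -> list[list[float]]:
    """For each pair of parties (i, j), add a shared random mask r to
    party i's gradient and subtract it from party j's, so every upload
    looks random to the server but the sum over parties is unchanged."""
    rng = random.Random(seed)   # stands in for a pairwise-agreed PRG seed
    n, dim = len(grads), len(grads[0])
    masked = [g[:] for g in grads]
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(dim):
                r = rng.uniform(-1e6, 1e6)
                masked[i][k] += r
                masked[j][k] -= r
    return masked

grads = [[0.1, -0.2], [0.3, 0.05], [-0.1, 0.4]]
masked = mask_gradients(grads)
# The server sees only masked vectors, yet their sum equals the true sum.
true_sum = [sum(g[k] for g in grads) for k in range(2)]
agg_sum = [sum(m[k] for m in masked) for k in range(2)]
print(true_sum, agg_sum)
```

The server can therefore aggregate honestly without ever seeing an individual participant's gradient in the clear.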

This architecture does not depend on a specific machine learning algorithm (e.g. logistic regression or DNNs), and all participants share the parameters of the final model.

== Security analysis == : If SMC or homomorphic encryption is used to aggregate gradients, this architecture can prevent data leakage caused by a semi-honest server. However, when training jointly under other security models, the horizontal federated learning architecture is vulnerable to malicious participants who train a Generative Adversarial Network (GAN).

Architecture of vertical federated learning systems

Suppose companies A and B jointly train a model, and each of their business systems holds its own data. In addition, B holds the labels the model needs to predict. For data privacy and security, A and B cannot exchange data directly. To ensure the confidentiality of data during training, a third-party collaborator C is introduced. Assume C is honest and does not collude with A or B, while A and B are honest-but-curious toward each other. Such a trusted third party C is usually played by an authority such as a government, or replaced by a secure computing node (such as Intel Software Guard Extensions, SGX). A vertical federated learning system usually consists of two parts:

  • Encrypted Entity Alignment. Because the two companies' user groups differ, the system uses encryption-based user-ID alignment techniques to confirm the common users of both companies without exposing their respective data. During entity alignment, the system does not expose users outside the intersection.

  • Encrypted Model Training. Once the common entities are determined, we can use the data of those entities to train a machine learning model. The training process can be divided into the following four steps:

    1. Collaborator C creates an encryption key pair and sends the public key to A and B
    2. A and B exchange, in encrypted form, the intermediate results needed to compute gradients and losses
    3. A and B each compute their encrypted gradients and add an additional mask; B also computes the encrypted loss; A and B then send the encrypted values to C
    4. C decrypts the gradients and loss and sends them back to A and B; A and B remove the masks and update their model parameters accordingly

During entity alignment and model training, the data of A and B stays local, and the data interaction in training does not lead to privacy leakage. Whether a potential data breach caused by C counts as a privacy violation may depend on the setting; to further prevent C from learning information from A or B, A and B can add encrypted random masks to hide their contributions from C. Thus both participants achieve their goal: jointly training a common model through federated learning.

Throughout training, each participant receives exactly the same loss and gradients as if the model had been trained on all the data gathered in one place without privacy constraints. The solution is therefore lossless, and its efficiency depends on the communication cost and the cost of computing over encrypted data.

During each iteration, the size of data transferred between A and B depends on the number of overlapping samples. Therefore, the distributed parallel computing technology can be used to further improve the efficiency of the algorithm.

== Security analysis == : The training protocol leaks no information to C, because all C sees are masked gradients, and the randomness of the masked matrix guarantees its security. In this protocol, party A learns its own gradient at each step, but that is not enough for A to learn anything about B, because the security of the scalar-product protocol rests on the fact that one cannot solve for more than n unknowns from only n equations.

Architecture of federated Transfer learning System

In the vertical federated learning example above, A and B have only a small sample intersection, and we want to learn labels for all of participant A's dataset. The vertical framework described so far only operates on the overlapping data. To extend its coverage to the whole sample space, we introduce transfer learning. This does not change the overall framework of vertical federated learning, but it does change the details of the intermediate computations exchanged between A and B.

Transfer learning involves learning a common representation between the features of A and B, and minimizing the error in predicting the target-domain party's labels using the source-domain party's labels. The gradient computations of A and B therefore differ from those in vertical federated learning. At inference time, each party still needs to compute its part of the prediction.

Reward system

To fully commercialize federated learning across different organizations, a fair platform and incentive mechanisms are needed. Once a model is built, its performance will manifest in real applications and can be recorded in a permanent data-recording mechanism (such as a blockchain). Organizations that provide more data are better off: the model's effectiveness depends on data providers' contributions to the system, and these contributions are recorded and rewarded. The model's effectiveness benefits all participants in the federated mechanism, which in turn motivates more institutions to join the data federation.

This architecture considers not only privacy protection and the effectiveness of collaborative modeling among multiple agencies, but also how to reward institutions that contribute more data and how to implement those incentives through a consensus mechanism. Federated learning is therefore a closed-loop learning mechanism.

Related Work

{% hideToggle vocabulary %}

| vocabulary | explanation | vocabulary | explanation |
| --- | --- | --- | --- |
| originality | n. originality | devote | vt. devote |
| garbled | adj. garbled | follow-up | follow-up |
| allocate | v. allocate | autonomy | n. autonomy |
| cope | v. cope | stringent | adj. stringent |
| regulatory | adj. regulatory | IID | independent and identically distributed |
| coordination | n. coordination | convergence bound | convergence bound |
| manage | v. manage | interoperability | n. interoperability |
| heterogeneous | adj. heterogeneous | premise | n. premise |

{% endhideToggle %}

Federated learning allows multiple participants to collaborate to build a machine learning model while the private training data remains private. As an emerging technology, federated learning has several creative routes, some rooted in existing fields.

Privacy-preserving machine learning

Federated machine learning can be regarded as privacy-preserving, decentralized, collaborative machine learning. It is closely related to multi-party privacy-preserving machine learning.

Federated Learning vs Distributed Machine Learning

At first glance, horizontal federated learning looks a lot like distributed machine learning. Distributed machine learning includes many aspects: distributed storage of training data, distributed operation of computing tasks and distributed distribution of model results.

Parameter Server is a typical element of distributed machine learning that can be used to speed up the training process. The parameter server stores the data on the distributed work nodes and distributes the data and computing resources through the central scheduling nodes, which improves the efficiency of training.

In horizontal federated learning, the working node is the data holder, who has full autonomy over its local data and can decide when and how to join federated learning; in the parameter-server setting, by contrast, the central node always retains control. Federated learning therefore faces a more complex learning environment. In addition, federated learning emphasizes protecting the data holder's privacy during training; effective privacy-protection methods will better cope with the increasingly strict privacy and data-security regulatory environment of the future.

As in distributed machine learning, federated learning also needs to address non-IID data, i.e. data that is not independent and identically distributed.

Federated Learning vs Edge Computing

Federated learning provides learning protocols for coordination and security, so it can also be considered an operating system for edge computing.

Federated Learning vs Federated Database Systems

Federated database systems integrate multiple database units and manage the integrated system as a whole. The concept was proposed to achieve interoperability among multiple independent databases: storage is distributed across the database units, and data operations within each unit can be heterogeneous. Federated database systems and federated learning are thus quite similar in how data is typed and stored.

However, a federated database system involves no privacy-protection mechanism while integrating the databases: all database units are completely visible to the management system. In other words, federated database systems focus on basic data operations (insert, delete, search, merge), while federated learning is about building a joint model while protecting data privacy, so that the value and patterns contained in the data can serve us better.

Applications

{% hideToggle vocabulary %}

| vocabulary | explanation | vocabulary | explanation |
| --- | --- | --- | --- |
| innovative | adj. innovative | intellectual property rights | intellectual property rights |
| personalized | adj. personalized | personal preference | personal preference |
| characteristic | n. characteristic | hinder | v. hinder |
| heterogeneity | n. heterogeneity | mutual | adj. mutual |
| limitation | n. limitation | ecosphere | n. ecosphere, ecosystem |
| borrowing | n. borrowing | loan | n. loan |
| collapse | v. collapse | symptom | n. symptom |
| envisage | v. envisage | vision | n. vision |
| pivotal | adj. pivotal | | |

{% endhideToggle %}

Federated learning is an innovative modeling mechanism: it can train a joint model on data from multiple parties without compromising the privacy and security of that data. It is therefore promising in industries such as sales and finance, where, for reasons of intellectual property, privacy, and data security, data cannot simply be pooled to train a machine learning model.

Federated learning can be used for Smart Retail. The purpose of smart retail is to use machine learning to provide personalized services to consumers, mainly product recommendation and sales services. The data features involved in smart retail are as follows:

  • User purchasing power – can be inferred from bank deposits
  • User preferences – analyzed from users’ social networks
  • Product characteristics – usually recorded in online stores

This data is typically stored in three different departments or enterprises.

In this case, we face two problems:

  • Due to the protection of data privacy and security, it is difficult to break down the data barriers between banks, social networking sites and online shopping sites. Therefore, the data cannot be directly aggregated to train the model.
  • The data stored by the three parties is usually heterogeneous, and traditional machine learning cannot directly work on such heterogeneous data.

These problems are easily solved with federated learning and transfer learning.

  • With federated learning, we can build machine learning models without exposing enterprise data. This not only fully protects data privacy and data security, but also provides users with personalized and targeted services, and thus realizes many benefits.
  • At the same time, we can use transfer learning to address the data heterogeneity problem and break through the limitations of traditional artificial intelligence techniques.

Therefore, federated learning provides good technical support for building a cross-enterprise, cross-data, cross-domain ecosystem for big data and artificial intelligence.

Federated learning frameworks can be used for multi-party database querying without exposing the data. In finance, for example, we are interested in detecting multi-party borrowing, a major risk factor in banking in which a malicious user borrows from one bank to repay a loan at another. Multi-party borrowing threatens financial stability; large amounts of such illegal activity could bring down the entire financial system.

Bank A and bank B can use a federated learning framework to find such users without disclosing their user lists to each other. In particular, each participant encrypts its user list with an encryption mechanism, and the federation takes the intersection of the encrypted lists. Decrypting the result yields the list of multi-party borrowers, without exposing either participant's other, "good" users. As we can see, this operation is related to vertical federated learning.
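The encrypt-and-intersect idea can be sketched with salted hashing as a toy stand-in for real cryptographic private set intersection (salted hashes can be brute-forced over small ID spaces, so this is only an illustration; all names and values are made up):

```python
import hashlib

def blind(user_ids: set[str], shared_salt: bytes) -> set[str]:
    """Each bank hashes its user list with a jointly agreed salt,
    so raw IDs are never exchanged (toy stand-in for cryptographic PSI)."""
    return {hashlib.sha256(shared_salt + uid.encode()).hexdigest()
            for uid in user_ids}

salt = b"jointly-negotiated-salt"   # agreed out of band by the two banks
bank_a = {"alice", "bob", "carol"}
bank_b = {"bob", "dave", "carol"}

# Only blinded lists are exchanged; the intersection reveals multi-party
# borrowers without exposing either bank's non-overlapping "good" users.
common = blind(bank_a, salt) & blind(bank_b, salt)
print(len(common))  # 2  (bob and carol)
```

Real deployments replace the salted hash with protocols such as RSA-blinded or Diffie-Hellman-based PSI, which resist brute-force recovery of the IDs.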

Federated learning can also be used for Smart Healthcare.

Federated Learning and Data Alliance of Enterprises

{% hideToggle vocabulary %}

| vocabulary | explanation | vocabulary | explanation |
| --- | --- | --- | --- |
| alliance | n. alliance | paradigm | n. paradigm |
| equitable | adj. equitable | regardless of | regardless of |
| carry out | carry out | | |

{% endhideToggle %}

With the help of consensus mechanism in blockchain technology, federated learning forms rules for the equitable allocation of profits.

Conclusions and Prospects

{% hideToggle vocabulary %}

vocabulary explain vocabulary explain
| vocabulary | explanation | vocabulary | explanation |
| --- | --- | --- | --- |
| bonus | n. bonus, dividend | | |

{% endhideToggle %}