“If I could get more training data, my model accuracy would be much better”, “We should get more data through the API”, “the source data is so poor we can’t use it”.

These are some of the explanations or reasons many engineers give when their models don’t work.

Data is the basis of machine learning or analysis projects, and while there is more data available today than ever before, problems such as insufficient data or data type mismatches are not uncommon.

But how do you know if these problems are real or just excuses? In other words, how do you find out if data is a constraint on your project?

Then you need to find the data bottleneck!

Generally speaking, data can be adjusted from the following three aspects:

  • Depth: increase the number of data points

  • Breadth: increase the diversity of data sources

  • Quality: clean up messy data

One: Work on data depth

This approach does not change the data structure, but adds data points.

You can’t always control the number of data points (for example, you can’t easily add more users), but you can almost always control at least some aspect of how much data you collect.

There are several different situations in which increasing the amount of data is useful.

1. A/B tests and experiments

If you are running an experiment, you need enough data points for the result to be statistically significant. How many you need also depends on other factors, such as the margin of error, the confidence level, and the variance of the distribution. For each experiment there is a minimum data threshold: once you reach it, you can move on, because adding more data points will not help; if you cannot reach it, data volume is the bottleneck of the experiment. The following blog post gives a good overview:

Towardsdatascience.com/how-do-you-…
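As a rough illustration, a pre-experiment power calculation along these lines tells you the minimum number of data points per variant. This is a minimal sketch using statsmodels; the baseline rate, minimum detectable effect, significance level, and power below are illustrative assumptions, not numbers from the article.

```python
# Estimate the per-variant sample size needed for a two-sided proportions test.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline_rate = 0.10        # assumed conversion rate of the control group
minimum_detectable = 0.12   # smallest treatment rate worth detecting (assumption)

effect_size = proportion_effectsize(minimum_detectable, baseline_rate)
n_per_group = NormalIndPower().solve_power(
    effect_size=effect_size,
    alpha=0.05,             # significance level
    power=0.8,              # desired statistical power
    alternative="two-sided",
)
print(f"Need roughly {n_per_group:.0f} data points per variant")
```

If your traffic cannot realistically reach that number, data volume is the bottleneck.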

2. Prediction accuracy in machine learning

If you are training a prediction model, accuracy will increase with the amount of training data, but only up to a “saturation” point. How do you know whether you have reached it? Retrain the model with different amounts of training data and plot prediction accuracy against data volume. If the curve has not flattened yet, the model is likely to benefit from additional data.

Image source: Kim and Park, via ResearchGate

www.researchgate.net/publication…
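Here is a minimal sketch of that accuracy-versus-data-volume check using scikit-learn’s learning_curve; the dataset and model are stand-ins chosen only to make the example self-contained.

```python
# Plot cross-validated accuracy against the number of training points.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import learning_curve

X, y = load_digits(return_X_y=True)  # stand-in dataset
sizes, train_scores, val_scores = learning_curve(
    RandomForestClassifier(n_estimators=100, random_state=0),  # stand-in model
    X, y,
    train_sizes=np.linspace(0.1, 1.0, 8),  # fractions of the training set
    cv=5,
)

plt.plot(sizes, val_scores.mean(axis=1), marker="o")
plt.xlabel("Number of training points")
plt.ylabel("Cross-validated accuracy")
plt.title("If the curve is still rising, more data should help")
plt.show()
```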

3. Powering deep learning

While traditional machine learning models can work with smaller amounts of data, the more complex the model, the more data it needs; deep learning models, in particular, simply cannot work without a large amount of data behind them. For deep learning models, big data is a requirement, not just a nice-to-have performance boost.

4. Analysis and thinking

Even if you are not using the data to make predictions, but only to enrich a report or run an analysis that supports a decision, data volume can still be a bottleneck. This is especially true when the data is highly heterogeneous and you want to analyze it at different levels of granularity: the finer you slice, the more data each slice needs, so increasing the amount of data is the right move. For example, if you have a large sales force selling a very broad range of products, each salesperson may only sell a subset of the products, and without enough sales records per salesperson-product pair you simply cannot compare their ability to sell a particular product. A quick way to check this is sketched below.
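Here is a minimal sketch of that granularity check in pandas. The file name, column names, and the threshold of 30 observations per cell are illustrative assumptions, not details from the article.

```python
# Count how many observations fall into each salesperson-product cell and
# flag the cells that are too sparse to support a fair comparison.
import pandas as pd

sales = pd.read_csv("sales.csv")  # hypothetical file with one row per sale

cell_counts = sales.groupby(["salesperson", "product"]).size()
too_sparse = cell_counts[cell_counts < 30]  # illustrative rule of thumb

print(f"{len(too_sparse)} of {len(cell_counts)} salesperson-product pairs "
      "have fewer than 30 sales; comparisons at this granularity need more data.")
```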

Two: Work on data breadth

Data diversity matters, but in my experience its value is often overestimated.

I once worked at a startup that used machine learning to predict home prices. Our strategic advantage was the breadth of our data: we integrated every data source we could to help forecast real estate prices.

The key to improving the model’s predictive power was deciding which data sources to acquire.

How do you evaluate the costs and benefits of acquiring new data?

We need to evaluate the benefit of new data on two key points: how strongly it relates to the target variable we are trying to predict (the stronger, the better) and how strongly it relates to the data we already have (the weaker, the better). Neither is easy to quantify in advance, but some qualitative judgment can help us sift out the most promising new data.
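The two checks can be approximated in a few lines. This sketch assumes pandas objects named `existing` (current features), `candidate` (the new feature, aligned on the same rows), and `target`; plain correlation is used for simplicity, though mutual information or a hold-out model comparison would be more robust.

```python
import pandas as pd

def evaluate_new_feature(candidate: pd.Series,
                         target: pd.Series,
                         existing: pd.DataFrame) -> None:
    # 1) Relation to the target variable: the stronger, the better.
    target_corr = candidate.corr(target)
    # 2) Relation to data we already have: the weaker, the better.
    redundancy = existing.corrwith(candidate).abs().max()
    print(f"correlation with target:         {target_corr:+.2f}")
    print(f"max |correlation| with existing: {redundancy:.2f}")
```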

The cost of new data can be viewed as the total cost of owning it. Sometimes there is a direct cost of buying data or paying for APIs, but that is only part of the story. The following factors are often the most important to consider:

  • One-time acquisition vs. repeated acquisition

  • Complexity of data transformation and storage

  • Data quality and data cleansing requirements

  • Data processing and parsing

Three: Improve data quality

Harvard professor Xiao-Li Meng once gave an inspiring talk in which he argued that the quality of data matters far more than the quantity of data.

The beauty of the talk was that he quantified this claim mathematically, with a statistical decomposition that trades data quality off against data quantity.

Watch the talk at www.youtube.com/watch?v=8YL…
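For reference, the decomposition I recall from that talk (summarized in my own notation, so treat this as a sketch rather than Meng’s exact statement) factors the error of a survey-style sample mean into a quality term, a quantity term, and a difficulty term:

$$\bar{Y}_n - \bar{Y}_N \;=\; \underbrace{\rho_{R,Y}}_{\text{data quality}} \times \underbrace{\sqrt{\frac{N-n}{n}}}_{\text{data quantity}} \times \underbrace{\sigma_Y}_{\text{problem difficulty}}$$

The point is that a small defect in the quality term can cancel out the benefit of an enormous N.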

My business experience reflects this: companies often acquire or combine more data without first analyzing whether the data they already have is sufficient.

Data quality is often an issue, and a big one. Problems can come from manual input errors, inaccuracies in the raw data, bugs in the aggregation or processing layer, data loss over time, and so on.

Improving data quality is a time-consuming and boring task, but it can also yield the most beneficial results.

Four: Summary

If a poorly performing model is limited by a data bottleneck, try to figure out where the bottleneck is. To sum up, work through the following three aspects:

First, data-volume problems can often be identified with a simple statistical-significance check or an accuracy-versus-data-volume curve. If that is not the problem, move on to the next step.

Second, in my experience, the value of new data sources is often exaggerated: not because the new data is useless, but because the information it carries may already be captured in some form by the data you have, especially if your existing data set is relatively rich.

Third, data quality is key, and it’s far better to focus on smaller, cleaner data sets than on bigger, messier ones.

Via towardsdatascience.com/do-you-have…

