Introduction

Application scenarios of SQL Server 2019

Breaking down data silos through data virtualization. By leveraging SQL Server PolyBase, SQL Server Big Data Clusters can query external data sources without moving or replicating the data. SQL Server 2019 introduces new connectors to additional data sources.
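As a hedged sketch of what a PolyBase query looks like: the data source name, connection string, credential, and table below are hypothetical placeholders, not values from this article.

```sql
-- Hypothetical external data source and table; adjust names,
-- location, and credential to your own environment.
CREATE EXTERNAL DATA SOURCE MyOracleSource
WITH (LOCATION = 'oracle://oracleserver:1521',
      CREDENTIAL = OracleCredential);

CREATE EXTERNAL TABLE dbo.SalesExternal (
    OrderID INT,
    Amount  DECIMAL(10, 2)
)
WITH (LOCATION = 'ORCL.SALES.ORDERS',
      DATA_SOURCE = MyOracleSource);

-- The external table can now be queried, and joined with local
-- relational tables, without copying the Oracle data into SQL Server.
SELECT TOP 10 * FROM dbo.SalesExternal;
```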

Build a data lake in SQL Server. A SQL Server Big Data Cluster includes a scalable HDFS storage pool that can store big data ingested from multiple external sources. Once the data is stored in HDFS inside the cluster, you can analyze and query it and combine it with relational data.
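A minimal sketch of reading files from the cluster's HDFS storage pool through an external table; the file path, format, and columns are hypothetical, and the `sqlhdfs://` location shown assumes a default Big Data Cluster deployment.

```sql
-- Hypothetical example: expose CSV files in the HDFS storage pool
-- as a queryable table. Paths and columns are placeholders.
CREATE EXTERNAL DATA SOURCE SqlStoragePool
WITH (LOCATION = 'sqlhdfs://controller-svc/default');

CREATE EXTERNAL FILE FORMAT csv_file
WITH (FORMAT_TYPE = DELIMITEDTEXT,
      FORMAT_OPTIONS (FIELD_TERMINATOR = ',', FIRST_ROW = 2));

CREATE EXTERNAL TABLE dbo.WebClicks (
    ClickTime DATETIME2,
    Url       NVARCHAR(400)
)
WITH (DATA_SOURCE = SqlStoragePool,
      LOCATION = '/clickstream/2019',   -- hypothetical HDFS directory
      FILE_FORMAT = csv_file);

SELECT COUNT(*) FROM dbo.WebClicks;
```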

Extended data mart. A SQL Server Big Data Cluster provides scale-out compute and storage to improve the performance of analyzing any data. Data from a variety of sources can be ingested and distributed across data pool nodes as a cache for further analysis.

Combining artificial intelligence and machine learning. A SQL Server Big Data Cluster can perform AI and machine learning tasks on data stored in the HDFS storage pool and the data pool. You can use Spark as well as the AI tools built into SQL Server, with R, Python, Scala, or Java.
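For the in-database route, SQL Server Machine Learning Services runs R or Python scripts via `sp_execute_external_script`. The sketch below assumes the feature is installed and that a table `dbo.Sales` with an `Amount` column exists; both are illustrative assumptions.

```sql
-- One-time setup: allow external scripts (requires sysadmin).
EXEC sp_configure 'external scripts enabled', 1;
RECONFIGURE;

-- Run a Python script over a T-SQL result set. dbo.Sales is a
-- hypothetical table used only for illustration.
EXEC sp_execute_external_script
    @language = N'Python',
    @script   = N'
OutputDataSet = InputDataSet.describe().reset_index()
',
    @input_data_1 = N'SELECT CAST(Amount AS FLOAT) AS Amount FROM dbo.Sales';
```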

Application deployment. Users can deploy applications as containers to a SQL Server Big Data Cluster. These applications are published as web services that consumer applications can call. Deployed applications have access to the data stored in the cluster and can be easily monitored.

Creating a database

  1. First, click New Query.

  2. Enter the following code:

-- Create a database named class with explicit data and log files.
-- Sizes are in MB by default; the d:\ path must be writable by the
-- SQL Server service account.
CREATE DATABASE class
ON
( NAME = class1_dat,
  FILENAME = 'd:\class1_dat.mdf',
  SIZE = 10MB,
  MAXSIZE = 50MB )
LOG ON
( NAME = class1_log,
  FILENAME = 'd:\class1_log.ldf',
  SIZE = 4MB,
  MAXSIZE = 25MB );




  3. Run the query. If it succeeds, SQL Server reports that the commands completed successfully.
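To confirm the database and its files were actually created, you can query the standard system catalog views (a small verification sketch, not part of the original steps):

```sql
-- Check that the database exists.
SELECT name, database_id
FROM sys.databases
WHERE name = 'class';

-- Inspect its files; size is stored in 8 KB pages,
-- so size * 8 / 1024 converts it to MB.
USE class;
SELECT name, physical_name, size * 8 / 1024 AS size_mb
FROM sys.database_files;
```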



  4. Follow-up

If you would like to learn more, you are welcome to follow my public account, and give this post a like.



You can also subscribe to my project column, which contains more complete projects.

This article was published using the article synchronization assistant.