Installing Ceph v15 with cephadm

Refer to the installation documentation; if you have questions, feel free to leave a message and we can discuss. Below are the basics of the deployment.

https://docs.ceph.com/en/latest/cephadm/
http://www.dtmao.cc/news_show_1027422.shtml

Three Alibaba Cloud machines were used for testing. Each mounts a 40G data disk, the system disk is also 40G, and the OS is CentOS 7.8.

  • Three servers

    hostname  IP
    node1     172.16.2.186
    node2     172.16.2.184
    node3     172.16.2.184
  • Install Docker

Refer to www.runoob.com/docker/cent…

curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun
  • Set the hostname and modify hosts (required on every server)

Set the hostname

hostnamectl set-hostname node1

Modify /etc/hosts:

vim /etc/hosts
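A sketch of the hosts entries for the three servers, using the IPs from the table above (note that node2 and node3 share an IP in the original listing; adjust to your actual addresses). The fragment is staged in /tmp so you can review it before appending it to /etc/hosts on each node:

```shell
# Stage the cluster hosts entries (IPs from the table above)
cat > /tmp/ceph-hosts <<'EOF'
172.16.2.186 node1
172.16.2.184 node2
172.16.2.184 node3
EOF
# Review, then append on each node:
# cat /tmp/ceph-hosts >> /etc/hosts
```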

  • Disabling a Firewall
systemctl stop firewalld
systemctl disable firewalld

Next, start our installation journey!

Obtain cephadm

cephadm is just a single Python 3 script

Connect to node1

wget https://raw.githubusercontent.com/ceph/ceph/octopus/src/cephadm/cephadm
# If wget cannot download the file directly, open it in a browser and copy the contents into a new file named cephadm

# Grant execute permission
chmod +x cephadm

Obtaining software packages

  • Although the standalone script is sufficient to bootstrap a cluster, it is convenient to have the cephadm command installed on the host. To install the package that provides the cephadm command for the Octopus release, run the following:
./cephadm add-repo --release octopus
./cephadm install
  • Confirm that cephadm is now in your PATH by running which:
which cephadm
  • On success, the command returns the path:
/usr/sbin/cephadm

Boot a new cluster

# 172.16.2.186 is the monitor IP (node1)
cephadm bootstrap --mon-ip 172.16.2.186

After the installation completes, the dashboard address is displayed. You can access the dashboard via the public IP address; make sure the required ports are open in the security group.

Add a host to a cluster

To add each new host to the cluster, perform two steps:

Install the cluster’s public SSH key in the root user’s authorized_keys file on the new host:

ssh-copy-id -f -i /etc/ceph/ceph.pub root@*<new-host>*

Such as:

ssh-copy-id -f -i /etc/ceph/ceph.pub root@node1
ssh-copy-id -f -i /etc/ceph/ceph.pub root@node2

Tell Ceph that a new node is part of a cluster:

ceph orch host add <newhost>

Such as:

ceph orch host add node1
ceph orch host add node2
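The two steps above can be scripted per host. A minimal sketch, assuming nodes node1..node3 and the cluster key at /etc/ceph/ceph.pub; with DRY_RUN=echo it only prints the commands, so clear DRY_RUN on a real cluster:

```shell
# Sketch: distribute the cluster SSH key and register each host.
# DRY_RUN=echo prints the commands; set DRY_RUN= to actually run them.
DRY_RUN=echo
for h in node1 node2 node3; do
  $DRY_RUN ssh-copy-id -f -i /etc/ceph/ceph.pub "root@$h"
  $DRY_RUN ceph orch host add "$h"
done
```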
Convert disks into OSDs

There are several ways to create a new OSD:

Tell Ceph to use any available and unused storage devices:

ceph orch apply osd --all-available-devices

Create an OSD from a specific device on a specific host:

ceph orch daemon add osd <host>:<device-path>

Such as:

ceph orch daemon add osd node1:/dev/vdb
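Since each of the three servers mounts the same 40G data disk, the per-device command can be looped over the hosts. A sketch assuming the data disk is /dev/vdb on every node (check with lsblk); DRY_RUN=echo prints the commands instead of running them:

```shell
# Sketch: create one OSD per node from its 40G data disk.
# DRY_RUN=echo prints the commands; set DRY_RUN= to actually run them.
DRY_RUN=echo
for h in node1 node2 node3; do
  $DRY_RUN ceph orch daemon add osd "$h:/dev/vdb"
done
```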

Note that the disk must be clean and unpartitioned. If it is already in use, wipe it as described below.

After deploying Ceph, enter the container with cephadm shell and erase the disk:

# Zap (erase!) a device so it can be re-used
ceph orch device zap <hostname> <path> [--force]

Example:

ceph orch device zap node1 /dev/sdb --force

Deploy RGW

Make sure that ceph -s reports HEALTH_OK.

Cephadm deploys RadosGW as a collection of daemons that manage specific domains and regions. (For more information on domains and regions, see Multi-Site.)

Note that with cephadm, the radosgw daemons are configured via the monitor configuration database, not via ceph.conf or the command line. If that configuration is not already in place (usually in the client.rgw.<realmName>.<zoneName> section), the radosgw daemons will start with default settings (for example, binding to port 80).

For example, to deploy three RGW daemons serving the myorg realm and the cn-east-1 zone on node1, node2, and node3:

# If you have not already created a realm, create one first:
radosgw-admin realm create --rgw-realm=myorg --default

# Next create a new zone group:
radosgw-admin zonegroup create --rgw-zonegroup=default --master --default

# Next create a zone:
radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=cn-east-1 --master --default

# Deploy a set of radosgw daemons for the realm and zone:
ceph orch apply rgw myorg cn-east-1 --placement="3 node1 node2 node3"

Once this is done, you can see the result with ceph -s.

Next, let’s create a user for RGW and enable RGW in the dashboard.

Reference documents blog.csdn.net/qq_40017427…

# Create a system user for RGW
radosgw-admin user create --uid=rgw --display-name=rgw --system

# Look up the user's access and secret keys
radosgw-admin user info --uid=rgw

# Give the keys to the dashboard
ceph dashboard set-rgw-api-access-key ZW1Y5IWDTB7K85P32H2A
ceph dashboard set-rgw-api-secret-key aehFzwMypF4V8Bm8A3baevHonEmu4E9a4oLZ1umh

# Disable SSL verification if necessary
ceph dashboard set-rgw-api-ssl-verify False

# Enable RGW in the dashboard
ceph dashboard set-rgw-api-host node1
ceph dashboard set-rgw-api-port 80
ceph dashboard set-rgw-api-scheme http
ceph dashboard set-rgw-api-user-id rgw
radosgw-admin period update --commit

See the reference documentation for verification

After completing these steps, you can restart all three servers.

Use Python to access Ceph object storage (see the reference below).

Reference documentation blog.csdn.net/nslogheyang…