Monday, April 17, 2017

ceph storage configuration introduction

**** setup info
- admin pc:  ip 192.168.58.42,  name: anode
- mon pc:    ip 192.168.58.151, name: cnode1
- osd0 pc:   ip 192.168.58.152, name: cnode2, with an additional disk /dev/sdb
- osd1 pc:   ip 192.168.58.153, name: cnode3, with an additional disk /dev/sdb
- client pc: ip 192.168.58.43,  name: cclient1

1-  preparation
for most parts, we use ceph-deploy to automate the configuration, so it is important to prepare passwordless authentication from the admin pc to the other 4 pcs.

- on admin pc install ceph and ceph-deploy
$ sudo apt-get install ceph ceph-deploy

- on the other 4 pcs, create a user cephdd and allow it to sudo to root without a password
$ sudo useradd -m cephdd
$ sudo passwd cephdd

then add a file named cephdd to /etc/sudoers.d/ with the following content
cephdd ALL = (root) NOPASSWD:ALL
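
ssh-copy-id pushes an existing public key, so the admin pc needs an SSH key pair first. If one has not been generated yet, it can be created with the defaults (a minimal sketch; an empty passphrase is typical for a lab setup like this):
$ ssh-keygen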

then on the admin pc, use ssh-copy-id to copy the public key to all 4 pcs
$ ssh-copy-id cephdd@cnode1
$ ssh-copy-id cephdd@cnode2
$ ssh-copy-id cephdd@cnode3
$ ssh-copy-id cephdd@cclient1

then update the file ~/.ssh/config on the admin pc as below
Host cnode1
   Hostname cnode1
   User cephdd
Host cnode2
   Hostname cnode2
   User cephdd
Host cnode3
   Hostname cnode3
   User cephdd
Host anode
   Hostname anode
   User cephdd
Host cclient1
   Hostname cclient1
   User cephdd
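
With the keys copied and the config in place, the passwordless setup can be verified from the admin pc (a quick sanity check, using cnode1 as an example; it should print "root" without asking for any password):
$ ssh cnode1 sudo whoami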

2- add mon pc to ceph
$ mkdir deployceph
$ cd deployceph
$ ceph-deploy new cnode1
$ ceph-deploy install anode cnode1 cnode2 cnode3 cclient1
$ ceph-deploy mon create-initial
- list mon status
$ ceph mon stat
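
Optionally, the overall cluster state can be checked as well (it will typically not report HEALTH_OK until the OSDs are added in the next steps):
$ ceph -s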

3- prepare osd0 and osd1
$ ssh cnode2
- create a partition on disk /dev/sdb with fdisk (see the parted sketch after this block), then make a btrfs filesystem on it
$ sudo mkfs.btrfs /dev/sdb1
$ sudo mkdir /ceph/
$ sudo mount /dev/sdb1 /ceph
$ sudo mkdir /ceph/osd0
$ sudo setfacl -R -m user:ceph:rwx /ceph/
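
For the partitioning step above, a non-interactive alternative to fdisk (a sketch, assuming the whole disk is unused and a single partition /dev/sdb1 is wanted):
$ sudo parted -s /dev/sdb mklabel gpt mkpart primary 0% 100%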

then osd1 on cnode3 is done the same way
$ ssh cnode3
- create a partition on disk /dev/sdb with fdisk, then make a btrfs filesystem on it
$ sudo mkfs.btrfs /dev/sdb1
$ sudo mkdir /ceph/
$ sudo mount /dev/sdb1 /ceph
$ sudo mkdir /ceph/osd1
$ sudo setfacl -R -m user:ceph:rwx /ceph/

4- add both OSDs and activate them
$ ceph-deploy osd prepare  cnode3:/ceph/osd1 cnode2:/ceph/osd0
$ ceph-deploy osd activate cnode3:/ceph/osd1 cnode2:/ceph/osd0
- watch cluster activity (press ctrl-c to stop)
$ sudo ceph -w
$ ceph-deploy admin anode cnode1 cnode2 cnode3 cclient1
- on each node, make the admin keyring readable
$ sudo chmod +r /etc/ceph/ceph.client.admin.keyring

- check ceph health
$ ceph health
HEALTH_OK
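
Both OSDs should now show as up and in (a quick check; the exact tree layout depends on the hosts):
$ ceph osd tree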


- add a metadata server (mds)
$ ceph-deploy mds create cnode1
- add an rgw (object gateway) instance
$ ceph-deploy rgw create cnode1
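
The gateway can be checked with a plain HTTP request (a sketch, assuming the default civetweb port 7480; it should return a small XML bucket listing):
$ curl http://cnode1:7480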


********************* client *********************
*** access by object data
1- store and download a file named myfile
- create a pool named data and an empty test file
$ rados mkpool data
$ touch myfile

- upload the file as an object named stored-myfile
$ rados put stored-myfile myfile --pool=data
- list objects in the pool
$ rados -p data ls
- download the object back
$ rados get stored-myfile myfile --pool=data
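
A quick way to confirm the round trip is to download the object to a different name and compare (a sketch; myfile.copy is just an arbitrary name):
$ rados get stored-myfile myfile.copy --pool=data
$ diff myfile myfile.copy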

- list all pools in the cluster
$ ceph osd pool ls


*** access by block device (rbd)
1- on the ceph client, create a block device image named drive1 with size 4 GB
$ sudo rbd create drive1 --size 4096 -k /path/to/ceph.client.admin.keyring

- map it to the local system
$ sudo rbd map drive1
- if the map fails, disable the features that the kernel might not support with the following command, then try again
$ sudo rbd feature disable drive1 deep-flatten,fast-diff,object-map

2- list block device images and get their info
$ sudo rbd list
$ rbd status drive1
$ rbd info drive1


3- after the map succeeds, it creates a device /dev/rbd0,
so we can make a filesystem on it and mount it like a local block device, for example:
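A minimal sketch, assuming ext4 and /mnt as the mount point (which matches the mount output below):
$ sudo mkfs.ext4 /dev/rbd0
$ sudo mount /dev/rbd0 /mnt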
$ mount | grep rbd0
/dev/rbd0 on /mnt type ext4 (rw,relatime,stripe=1024,data=ordered)





*** access by ceph filesystem
1- create two pools, one for storing data and another for storing metadata
$ ceph osd pool create cephfs_data 1
$ ceph osd pool create cephfs_metadata 2

- create a file system named cephfs
$ ceph fs new cephfs cephfs_metadata cephfs_data
- list pgs by pool, list file systems and list mds status
$ ceph pg ls-by-pool data
$ ceph pg ls-by-pool cephfs_data

$ ceph fs ls
$ ceph mds stat


2- create a file named admin.secret using the key from /etc/ceph/ceph.client.admin.keyring, which looks like
[client.admin]
    key = AQB/TfRYp6LmNRAAGLSuBlKCPa6hVGnghlz93g==
the admin.secret file must contain only the key value itself:
AQB/TfRYp6LmNRAAGLSuBlKCPa6hVGnghlz93g==
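
One way to produce that file and to create the mount point used in the next step (a sketch; it assumes the keyring was made readable in step 4 and that ~ resolves to /home/user1, matching the path in the mount commands below):
$ grep 'key =' /etc/ceph/ceph.client.admin.keyring | awk '{print $3}' > ~/admin.secret
$ sudo mkdir -p /mnt/mycephfs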

3- mount it to a local directory
$ sudo mount.ceph cnode1:/ /mnt/mycephfs/ -o name=admin,secretfile=/home/user1/admin.secret 

or
$ sudo mount -t ceph 192.168.58.151:/ /mnt/mycephfs/ -o name=admin,secretfile=/home/user1/admin.secret

$ df -h
Filesystem        Size  Used Avail Use% Mounted on
udev              474M     0  474M   0% /dev
tmpfs             100M  5.8M   94M   6% /run
/dev/sda1          79G  3.8G   71G   6% /
tmpfs             496M     0  496M   0% /dev/shm
tmpfs             5.0M     0  5.0M   0% /run/lock
tmpfs             496M     0  496M   0% /sys/fs/cgroup
tmpfs             100M     0  100M   0% /run/user/1000
192.168.58.151:/  160G   15G  146G   9% /mnt/mycephfs





