Ceph


Introduction

Ceph is a distributed object store and file system designed to provide excellent performance, reliability and scalability. See: http://ceph.com/

Architecture

[Figure: Ceph_stack.png, overview of the Ceph storage stack]

Grid'5000 Deployment

Site     Size     Configuration        Rados  RBD  CephFS  RadosGW
Rennes   8.6 TB   16 OSDs on 4 nodes   Yes    Yes  No      No

Configuration

Generate your key

In order to access the object store, you will need a Cephx key. See: https://api.grid5000.fr/sid/storage/ceph/ui/

Your key is also available from the frontends:

# curl -k https://api.grid5000.fr/sid/storage/ceph/auths/jdoe.keyring
[client.jdoe]
  key = AQBwknVUwAPAIRAACddyuVTuP37M55s2aVtPrg==

Note: Replace jdoe with your login.

Create Ceph pool

Need: Generate your key

Manage your pools from the Grid'5000 Ceph frontend: https://api.grid5000.fr/sid/storage/ceph/ui/

Configure Ceph on clients

On a deployed environment

frennes # oarsub -I -t deploy
frennes # kadeploy3 -e ubuntu-x64-1404 -k -f $OAR_NODEFILE 
frennes # ssh root@node
node # apt-get install ceph

Create a Ceph configuration file /etc/ceph/ceph.conf:

[global]
  mon initial members = ceph0,ceph1,ceph2
  mon host = 172.16.111.30,172.16.111.31,172.16.111.32

Create a Ceph keyring file /etc/ceph/ceph.client.jdoe.keyring containing your key:

[client.jdoe]
  key = AQBwknVUwAPAIRAACddyuVTuP37M55s2aVtPrg==
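
To check that the client is correctly configured, you can list the content of your pool (this assumes the pool, here jdoe_pool, has already been created from the Ceph frontend):

node # rados --id jdoe --pool jdoe_pool ls

If the configuration file or the keyring is wrong, this command fails with an authentication or connection error instead of returning the (possibly empty) list of objects.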

Usage

Rados access

Need: Create a Ceph pool, Configure Ceph on clients

From command line

Put an object into a pool

node # rados --id jdoe --pool jdoe_pool put <objectname> <infile>

List the objects of a pool

node # rados --id jdoe --pool jdoe_pool ls

Get object from a pool

node # rados --id jdoe --pool jdoe_pool get <objectname> <outfile>

Remove an object

node # rados --id jdoe --pool jdoe_pool rm <objectname>
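
As a worked example, the commands below store a small file as an object, read it back and remove it; the object name hello_object and the file paths are arbitrary, and the pool jdoe_pool must already exist:

node # echo "Hello Ceph" > /tmp/hello.txt
node # rados --id jdoe --pool jdoe_pool put hello_object /tmp/hello.txt
node # rados --id jdoe --pool jdoe_pool ls
node # rados --id jdoe --pool jdoe_pool get hello_object /tmp/hello.out
node # diff /tmp/hello.txt /tmp/hello.out
node # rados --id jdoe --pool jdoe_pool rm hello_object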

From your application code (C/C++, Python, Java, Ruby, ...)

See: http://ceph.com/docs/master/rados/api/librados-intro/
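
As an illustration, here is a minimal sketch using the Python librados binding (python-rados, packaged as python-ceph on Debian/Ubuntu); the client name, pool name and object name are only examples and must match your own key and pool:

# Minimal librados example: connect as client.jdoe, write an object
# into jdoe_pool and read it back.
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                      name='client.jdoe',
                      conf={'keyring': '/etc/ceph/ceph.client.jdoe.keyring'})
cluster.connect()

ioctx = cluster.open_ioctx('jdoe_pool')          # I/O context on your pool
ioctx.write_full('hello_object', b'Hello Ceph')  # store an object
print(ioctx.read('hello_object'))                # read it back
ioctx.close()
cluster.shutdown()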


RBD (Rados Block Device)

Need: Create a Ceph pool, Configure Ceph on clients

Create a Rados Block Device

node # rbd --id jdoe --pool jdoe_pool create --size <MB> <rbd_name>
node # rbd --id jdoe --pool jdoe_pool ls
node # rbd --id jdoe --pool jdoe_pool info <rbd_name>

Create filesystem and mount RBD

node # modprobe rbd
node # rbd --id jdoe --pool jdoe_pool map <rbd_name>
node # rbd --id jdoe --pool jdoe_pool showmapped
id pool          image snap device    
1  jdoe_pool <rbd_name>  -    /dev/rbd1 
node # mkfs.ext4 /dev/rbd1
node # mkdir /mnt/rbd
node # mount /dev/rbd1 /mnt/rbd
node # df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3        15G  1.6G   13G  11% /
...
/dev/sda5       525G   70M  498G   1% /tmp
/dev/rbd1        93M  1.6M   85M   2% /mnt/rbd
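
When you are done, unmount the filesystem and unmap the block device (assuming it was mapped to /dev/rbd1 as in the output above):

node # umount /mnt/rbd
node # rbd unmap /dev/rbd1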


QEMU/RBD