Ceph


Warning: Development in progress...

Introduction

Ceph is a distributed object store and file system designed to provide excellent performance, reliability and scalability. See more at: http://ceph.com/

Ceph architecture

[Figure: the Ceph stack]

Grid'5000 Deployment

Sites    Size     Configuration        Rados   RBD   CephFS   RadosGW
Rennes   ~ 9TB    16 OSDs on 4 nodes   Yes     Yes   No       No

Configuration

Generate your key

In order to access the object store, you will need a Cephx key. See: https://api.grid5000.fr/sid/storage/ceph/ui/

Your key is also available from the frontends:

frennes # curl -k https://api.grid5000.fr/sid/storage/ceph/auths/jdoe.keyring
[client.jdoe]
  key = AQBwknVUwAPAIRAACddyuVTuP37M55s2aVtPrg==

Note: replace jdoe with your login.
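
If you intend to use Ceph from the frontend, you can store the key directly in the location this page uses later (a sketch, assuming the ~/.ceph layout described below):

frennes # mkdir -p ~/.ceph
frennes # curl -k https://api.grid5000.fr/sid/storage/ceph/auths/jdoe.keyring > ~/.ceph/ceph.client.jdoe.keyring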

Create/Update/Delete Ceph pool

Requirement: Generate your key

Manage your Ceph pools from the Grid'5000 Ceph frontend: https://api.grid5000.fr/sid/storage/ceph/ui/

[Screenshot: the Grid'5000 Ceph frontend]

Configure Ceph on clients

On a deployed environment

frennes # oarsub -I -t deploy
frennes # kadeploy3 -e ubuntu-x64-1404 -k -f $OAR_NODEFILE 
frennes # ssh root@<node>
node # apt-get install ceph
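
You can then check that the client tools are installed (the version printed depends on the environment you deployed):

node # ceph --version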

Create a Ceph configuration file /etc/ceph/ceph.conf:

[global]
  mon initial members = ceph0,ceph1,ceph2
  mon host = 172.16.111.30,172.16.111.31,172.16.111.32

Create a Ceph keyring file /etc/ceph/ceph.client.jdoe.keyring containing your key:

[client.jdoe]
  key = AQBwknVUwAPAIRAACddyuVTuP37M55s2aVtPrg==
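
With the configuration and keyring in place, a quick way to check that the node can reach the cluster is to list the pools your key gives access to (lspools is a standard rados subcommand):

node # rados --id jdoe lspools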

On the frontend or a node in the production environment

Note: The Ceph version on the frontend and in the production environment is old. Object store access works, but RBD support in QEMU/KVM does not.

Create a Ceph configuration file ~/.ceph/config:

[global]
  mon initial members = ceph0,ceph1,ceph2
  mon host = 172.16.111.30,172.16.111.31,172.16.111.32

Create a Ceph keyring file ~/.ceph/ceph.client.jdoe.keyring containing your key:

[client.jdoe]
  key = AQBwknVUwAPAIRAACddyuVTuP37M55s2aVtPrg==
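
Since the keyring is not in the default /etc/ceph location here, the tools may not find it automatically; passing both the configuration file and the keyring explicitly is one way to be safe (a sketch, assuming the paths created above):

frennes # rados -c ~/.ceph/config --keyring ~/.ceph/ceph.client.jdoe.keyring --id jdoe lspools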

Usage

Rados Object Store access

Requirements: Create a Ceph pool, Configure Ceph on clients

From command line

Note: Add -c ~/.ceph/config to the rados command line when running as a user (not root) from the frontend or the production environment.

Put an object into a pool

node # rados --id jdoe --pool jdoe_pool put <objectname> <infile>

List objects of a pool

node # rados --id jdoe --pool jdoe_pool ls

Get object from a pool

node # rados --id jdoe --pool jdoe_pool get <objectname> <outfile>

Remove an object

node # rados --id jdoe --pool jdoe_pool rm <objectname>
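
Putting these commands together, a quick round trip to verify that the object store works end to end (a sketch; /tmp/test.txt and test-obj are arbitrary names):

node # echo "hello ceph" > /tmp/test.txt
node # rados --id jdoe --pool jdoe_pool put test-obj /tmp/test.txt
node # rados --id jdoe --pool jdoe_pool get test-obj /tmp/test-copy.txt
node # diff /tmp/test.txt /tmp/test-copy.txt
node # rados --id jdoe --pool jdoe_pool rm test-obj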

From your application (C/C++, Python, Java, Ruby, ...)

See: http://ceph.com/docs/master/rados/api/librados-intro/

RBD (Rados Block Device)

Requirements: Create a Ceph pool, Configure Ceph on clients

Create a Rados Block Device

node # rbd --id jdoe --pool jdoe_pool create <rbd_name> --size <MB>
node # rbd --id jdoe --pool jdoe_pool ls
node # rbd --id jdoe --pool jdoe_pool info <rbd_name>
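
For example, to create and inspect a 100 MB image (myrbd is an arbitrary name):

node # rbd --id jdoe --pool jdoe_pool create myrbd --size 100
node # rbd --id jdoe --pool jdoe_pool info myrbd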

Create a filesystem and mount the RBD

node # modprobe rbd
node # rbd --id jdoe --pool jdoe_pool map <rbd_name>
node # rbd --id jdoe --pool jdoe_pool showmapped
id pool      image      snap device
1  jdoe_pool <rbd_name> -    /dev/rbd1
node # mkfs.ext4 /dev/rbd1
node # mkdir /mnt/rbd
node # mount /dev/rbd1 /mnt/rbd
node # df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda3        15G  1.6G   13G  11% /
...
/dev/sda5       525G   70M  498G   1% /tmp
/dev/rbd1        93M  1.6M   85M   2% /mnt/rbd
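
When you are done, unmount the filesystem and unmap the device before releasing the node (a sketch, using the device mapped above):

node # umount /mnt/rbd
node # rbd --id jdoe unmap /dev/rbd1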

Resize, snapshots, copy, etc...

See the Ceph RBD documentation and the rbd manual page.


QEMU/RBD

Requirements: Create a Ceph pool, Configure Ceph on clients

Convert a qcow2 file into RBD

frennes # scp /grid5000/virt-images/wheezy-x64-base.qcow2 root@<node>:/tmp
node # apt-get install qemu qemu-kvm
node # qemu-img convert -f qcow2 -O raw /tmp/wheezy-x64-base.qcow2 rbd:<pool_name>/debian7:id=jdoe
node # rbd --id jdoe --pool <pool_name> ls
debian7
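
You can check the converted image directly through the rbd: URI (assuming your QEMU build includes RBD support):

node # qemu-img info rbd:<pool_name>/debian7:id=jdoe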

Start a KVM virtual machine from a Rados Block Device

node # screen kvm -m 1024 -drive format=raw,file=rbd:<pool_name>/debian7:id=jdoe -nographic
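
For better disk performance, the image can be attached through virtio instead of the default IDE emulation (a sketch; same rbd: URI as above):

node # screen kvm -m 1024 -drive format=raw,file=rbd:<pool_name>/debian7:id=jdoe,if=virtio -nographic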