Ceph
Revision as of 18:09, 9 December 2014
Introduction
Ceph is a distributed object store and file system designed to provide excellent performance, reliability, and scalability. See http://ceph.com/ for more.
Architecture
Grid'5000 Deployment

Sites  | Size   | Configuration      | Rados  | RBD    | CephFS | RadosGW
-------|--------|--------------------|--------|--------|--------|--------
Rennes | 8.6 TB | 16 OSDs on 4 nodes | (icon) | (icon) | (icon) | (icon)
Configuration
Generate your key
In order to access the object store you will need a Cephx key. See: https://api.grid5000.fr/sid/storage/ceph/ui/

Your key is also available from the frontends:

# curl -k https://api.grid5000.fr/sid/storage/ceph/auths/jdoe.keyring
[client.jdoe]
    key = AQBwknVUwAPAIRAACddyuVTuP37M55s2aVtPrg==
Note: replace jdoe with your login.
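If you need only the bare base64 secret (without the INI wrapper), it can be pulled out of the downloaded keyring with standard tools. A minimal sketch, assuming a keyring saved locally; the /tmp path and the awk approach are illustrative, and the sample content mirrors the keyring shown above:

```shell
# Save a sample keyring as returned by the API (illustrative path)
cat > /tmp/jdoe.keyring <<'EOF'
[client.jdoe]
    key = AQBwknVUwAPAIRAACddyuVTuP37M55s2aVtPrg==
EOF

# Extract only the secret from the "key = ..." line (last field)
KEY=$(awk '/key/ {print $NF}' /tmp/jdoe.keyring)
echo "$KEY"
```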
Create Ceph pool
Need : Generate your key
Manage your pool from the Grid'5000 Ceph frontend : https://api.grid5000.fr/sid/storage/ceph/ui/
Configure Ceph on clients
On a deployed environment
frennes # oarsub -I -t deploy
frennes # kadeploy3 -e ubuntu-x64-1404 -k -f $OAR_NODEFILE
frennes # ssh root@node
node # apt-get install ceph
Create a ceph configuration file /etc/ceph/ceph.conf:

[global]
mon initial members = ceph0,ceph1,ceph2
mon host = 172.16.111.30,172.16.111.31,172.16.111.32
Create a ceph keyring file /etc/ceph/ceph.client.jdoe.keyring with your keyring:

[client.jdoe]
    key = AQBwknVUwAPAIRAACddyuVTuP37M55s2aVtPrg==
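The two files above can be written in one go. A minimal sketch that uses a scratch directory so it can run unprivileged; on a real node you would target /etc/ceph instead, and the monitor addresses and key are the ones from this page:

```shell
# Scratch directory standing in for /etc/ceph (illustrative)
CEPH_DIR=/tmp/ceph-conf-demo
mkdir -p "$CEPH_DIR"

# Client configuration: monitor names and addresses from this page
cat > "$CEPH_DIR/ceph.conf" <<'EOF'
[global]
mon initial members = ceph0,ceph1,ceph2
mon host = 172.16.111.30,172.16.111.31,172.16.111.32
EOF

# Keyring for the client.jdoe identity
cat > "$CEPH_DIR/ceph.client.jdoe.keyring" <<'EOF'
[client.jdoe]
    key = AQBwknVUwAPAIRAACddyuVTuP37M55s2aVtPrg==
EOF

grep 'mon host' "$CEPH_DIR/ceph.conf"
```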
Usage
Rados access
Need : Create a Ceph pool • Configure Ceph on client
From command line
Put an object into a pool
node # rados --id jdoe --pool jdoe_pool put <objectname> <infile>
List the objects of a pool
node # rados --id jdoe --pool jdoe_pool ls
Get object from a pool
node # rados --id jdoe --pool jdoe_pool get <objectname> <outfile>
Remove an object
node # rados --id jdoe --pool jdoe_pool rm <objectname>
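The four commands above can be chained into a quick round-trip sanity check: upload a file, read it back, and compare. This is a sketch assuming the jdoe identity and jdoe_pool from the previous sections; it only runs on a node configured as described above:

```shell
# Round-trip check (assumes jdoe / jdoe_pool configured as above)
echo "hello ceph" > /tmp/infile
rados --id jdoe --pool jdoe_pool put testobj /tmp/infile   # upload
rados --id jdoe --pool jdoe_pool get testobj /tmp/outfile  # download
cmp /tmp/infile /tmp/outfile && echo "round-trip OK"
rados --id jdoe --pool jdoe_pool rm testobj                # clean up
```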
From your application code (C/C++, Python, Java, Ruby, ...)
See : http://ceph.com/docs/master/rados/api/librados-intro/
RBD (Rados Block Device)
Need : Create a Ceph pool • Configure Ceph on client
Create a Rados Block Device
node # rbd --id jdoe --pool jdoe_pool create --size <MB> <rbd_name>
node # rbd --id jdoe --pool jdoe_pool ls
node # rbd --id jdoe --pool jdoe_pool info <rbd_name>
Create filesystem and mount RBD
node # modprobe rbd
node # rbd --id jdoe --pool jdoe_pool map <rbd_name>
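Once mapped, the image appears as a block device that can be formatted and mounted like any other. A sketch of the remaining steps, assuming the <rbd_name> image created above; the ext4 filesystem and /mnt/rbd mount point are illustrative choices, and the device may also appear as /dev/rbd0:

```shell
# Format and mount the mapped RBD (device path follows /dev/rbd/<pool>/<image>)
mkfs.ext4 /dev/rbd/jdoe_pool/<rbd_name>
mkdir -p /mnt/rbd
mount /dev/rbd/jdoe_pool/<rbd_name> /mnt/rbd

# When finished: unmount and unmap
umount /mnt/rbd
rbd --id jdoe --pool jdoe_pool unmap /dev/rbd/jdoe_pool/<rbd_name>
```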