Ceph
Revision as of 11:44, 13 January 2015
Introduction
Ceph is a distributed object store and file system designed to provide excellent performance, reliability and scalability. See more at: http://ceph.com/
Ceph architecture
Grid'5000 Deployment
| Sites  | Size  | Configuration      | Rados | RBD | CephFS | RadosGW |
|--------|-------|--------------------|-------|-----|--------|---------|
| Rennes | ~ 9TB | 16 OSDs on 4 nodes | ✓     | ✓   | ✗      | ✗       |
Configuration
Generate your key
In order to access the object store you will need a Cephx key. See: https://api.grid5000.fr/sid/storage/ceph/ui/

Your key is also available from the frontends:

    [client.jdoe]
        key = AQBwknVUwAPAIRAACddyuVTuP37M55s2aVtPrg==

Note: replace jdoe with your login.
Create/Update/Delete Ceph pool
Requirement: Generate your key
Manage your Ceph pools from the Grid'5000 Ceph frontend : https://api.grid5000.fr/sid/storage/ceph/ui/
Configure Ceph on clients
On a deployed environment
Create a ceph configuration file /etc/ceph/ceph.conf:

    [global]
    mon initial members = ceph0,ceph1,ceph2
    mon host = 172.16.111.30,172.16.111.31,172.16.111.32

Create a ceph keyring file /etc/ceph/ceph.client.jdoe.keyring (replace jdoe with your login) with your keyring:

    [client.jdoe]
        key = AQBwknVUwAPAIRAACddyuVTuP37M55s2aVtPrg==
On the frontend or a node with the production environment

Note: The Ceph version on the frontend and in the production environment is old. Object Store access works, but RBD support in Qemu/KVM does not.
Create a ceph configuration file ~/.ceph/config:

    [global]
    mon initial members = ceph0,ceph1,ceph2
    mon host = 172.16.111.30,172.16.111.31,172.16.111.32

Create a ceph keyring file ~/.ceph/ceph.client.jdoe.keyring (replace jdoe with your login) with your keyring:

    [client.jdoe]
        key = AQBwknVUwAPAIRAACddyuVTuP37M55s2aVtPrg==
Usage
Rados Object Store access
Requirement: Create a Ceph pool • Configure Ceph on clients
From command line
Note: When running from the frontend or a production-environment node, add the -c ~/.ceph/config option to the rados commands below so that your configuration file is used.
Put an object into a pool
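The original command was not preserved in this revision; with the rados CLI, uploading a local file as an object typically looks like the following sketch (the pool name jdoe_objectstore, object name and file path are illustrative placeholders, not from the original):

```shell
# Upload a local file into the pool as an object named "my_object"
# (pool and names are placeholders; authenticate with your Cephx id)
rados --id jdoe --pool jdoe_objectstore put my_object /path/to/local_file
```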
List objects of a pool
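Again the exact invocation was lost; with the rados CLI, listing the objects of a pool is typically (pool name is a placeholder):

```shell
# List all objects stored in the pool
rados --id jdoe --pool jdoe_objectstore ls
```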
Get object from a pool
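A typical rados CLI sketch for downloading an object back to a local file (object and path names are placeholders):

```shell
# Retrieve the object "my_object" and write it to a local file
rados --id jdoe --pool jdoe_objectstore get my_object /tmp/my_object
```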
Remove an object
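A typical rados CLI sketch for deleting an object (names are placeholders):

```shell
# Remove the object "my_object" from the pool
rados --id jdoe --pool jdoe_objectstore rm my_object
```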
Usage information
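The per-pool and cluster-wide statistics shown below are in the format printed by rados df; the original command was lost in extraction, but it was presumably equivalent to:

```shell
# Show usage statistics for your pools and the whole cluster
rados --id jdoe df
```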
    pool name            category  KB       objects  clones  degraded  unfound  rd  rd KB  wr   wr KB
    pmorillo_objectstore -         1563027  2        0       0         0        0   0      628  2558455
      total used       960300628   295991
      total avail     7800655596
      total space     9229804032
From your application (C/C++, Python, Java, Ruby, PHP...)
See : http://ceph.com/docs/master/rados/api/librados-intro/
RBD (Rados Block Device)
Requirement: Create a Ceph pool • Configure Ceph on clients
Create a Rados Block Device
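The original command is missing here; with the rbd CLI, creating an image typically looks like this (pool and image names are placeholders; the 100 MB size is an assumption that matches the ~93M filesystem mounted below):

```shell
# Create a 100 MB RBD image in your pool (size is given in megabytes)
rbd --id jdoe --pool jdoe_pool create my_rbd --size 100
```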
Create filesystem and mount RBD
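The original command sequence was lost; a typical sketch, assuming the kernel RBD client and the placeholder names used above, that leads to the showmapped and df output shown below:

```shell
# Load the kernel RBD module and map the image to a block device
modprobe rbd
rbd --id jdoe --pool jdoe_pool map my_rbd
# Check which /dev/rbdX device the image was mapped to
rbd --id jdoe showmapped
# Create a filesystem on the mapped device and mount it
mkfs.ext4 /dev/rbd1
mkdir -p /mnt/rbd
mount /dev/rbd1 /mnt/rbd
df -h
```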
    id pool      image      snap device
    1  jdoe_pool <rbd_name> -    /dev/rbd1
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/sda3        15G  1.6G   13G  11% /
    ...
    /dev/sda5       525G   70M  498G   1% /tmp
    /dev/rbd1        93M  1.6M   85M   2% /mnt/rbd
Resize, snapshots, copy, etc.
See :
- http://ceph.com/docs/master/rbd/rados-rbd-cmds/
- http://ceph.com/docs/master/man/8/rbd/#examples
- http://ceph.com/docs/master/rbd/rbd-snapshot/
- rbd -h
QEMU/RBD
Requirement: Create a Ceph pool • Configure Ceph on clients
Convert a qcow2 file into RBD

Copy the qcow2 image to the node, install qemu, then convert the image directly into an RBD volume (replace node, pool_name and jdoe with your values):

    frennes$ scp /grid5000/virt-images/wheezy-x64-base.qcow2 root@node:/tmp
    node# apt-get install qemu qemu-kvm
    node# qemu-img convert -f qcow2 -O raw /tmp/wheezy-x64-base.qcow2 rbd:pool_name/debian7:id=jdoe
    node# rbd --id jdoe --pool pool_name ls
    debian7

Start KVM virtual machine from a Rados Block Device

    node# screen kvm -m 1024 -drive format=raw,file=rbd:pool_name/debian7:id=jdoe -nographic