Ceph
Revision as of 14:52, 19 January 2015
Introduction
Ceph is a distributed object store and file system designed to provide excellent performance, reliability and scalability. See more at: http://ceph.com/
Ceph architecture
Grid'5000 Deployment
| Sites | Size | Configuration | Rados | RBD | CephFS | RadosGW |
|---|---|---|---|---|---|---|
| Rennes | ~ 9TB | 16 OSDs on 4 nodes | | | | |
Configuration
Generate your key
To access the object store you will need a Cephx key. See: https://api.grid5000.fr/sid/storage/ceph/ui/

Your key is also available from the frontends:

[client.jdoe]
  key = AQBwknVUwAPAIRAACddyuVTuP37M55s2aVtPrg==

Note: Replace jdoe by your login.
Create/Update/Delete Ceph pools
Requirement: Generate your key
Manage your Ceph pools from the Grid'5000 Ceph frontend : https://api.grid5000.fr/sid/storage/ceph/ui/
Currently, each user is limited to 3 pools.
Replication size
- 1: no replication (not secure, most efficient for write operations)
- n: one primary object + n-1 replicas (more secure, less efficient for write operations)
Note: See http://ceph.com/papers/weil-thesis.pdf (page 130)
You can edit the replication size by clicking on its value. If the new value is greater than the old one, objects will automatically be replicated. Depending on the number of objects, this operation will stress the Ceph cluster and may take some time.
Quota
Quota is currently limited to 2TB per pool. You can change this quota at any time by clicking on the quota value, so prefer a small, realistic quota at first.
Expiration date
The expiration date is limited to 2 months. You can extend the expiration date at any time by clicking on the expiration date value.
Configure Ceph on clients
On a deployed environment
Create a Ceph configuration file /etc/ceph/ceph.conf:

[global]
  mon initial members = ceph0,ceph1,ceph2
  mon host = 172.16.111.30,172.16.111.31,172.16.111.32
Create a Ceph keyring file /etc/ceph/ceph.client.jdoe.keyring with your key:

[client.jdoe]
  key = AQBwknVUwAPAIRAACddyuVTuP37M55s2aVtPrg==
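With the configuration and keyring in place, access can be checked from the node; a sketch, assuming jdoe is your login (the default config path /etc/ceph/ceph.conf is picked up automatically):

```shell
# Show cluster-wide and per-pool storage usage with your Cephx identity.
rados --id jdoe df

# List the pools visible to this client.
rados --id jdoe lspools
```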
On the frontend or on a node with the production environment
Note: The Ceph version on the frontend and in the production environment is old. Object store access works, but RBD support in Qemu/KVM does not.
Create a Ceph configuration file ~/.ceph/config:

[global]
  mon initial members = ceph0,ceph1,ceph2
  mon host = 172.16.111.30,172.16.111.31,172.16.111.32
Create a Ceph keyring file ~/.ceph/ceph.client.jdoe.keyring with your key:

[client.jdoe]
  key = AQBwknVUwAPAIRAACddyuVTuP37M55s2aVtPrg==
Usage
Rados Object Store access
Requirements: Create a Ceph pool • Configure Ceph on client
From command line
Note: From the frontend, add -c ~/.ceph/config --id jdoe to each rados command below (as in the benchmark examples).
Put an object into a pool
List objects of a pool
Get object from a pool
Remove an object
Usage informations
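The steps above can be sketched with the rados CLI; jdoe, jdoe_pool and the object/file names are placeholders, and the -c option from the note applies when running from the frontend:

```shell
# Put an object into a pool
rados --id jdoe --pool jdoe_pool put my_object /path/to/local_file

# List objects of a pool
rados --id jdoe --pool jdoe_pool ls

# Get an object from a pool (written to a local file)
rados --id jdoe --pool jdoe_pool get my_object /tmp/local_copy

# Remove an object
rados --id jdoe --pool jdoe_pool rm my_object

# Usage information, per pool and total
rados --id jdoe df
```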
pool name             category  KB       objects  clones  degraded  unfound  rd  rd KB  wr   wr KB
pmorillo_objectstore  -         1563027  2        0       0         0        0   0      628  2558455
  total used       960300628   295991
  total avail     7800655596
  total space     9229804032
From your application (C/C++, Python, Java, Ruby, PHP...)
See : http://ceph.com/docs/master/rados/api/librados-intro/
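As a sketch of the Python binding (module rados, shipped with Ceph); the config path, login and pool name are the same placeholders as above, and this only runs against a reachable cluster:

```python
import rados

# Connect using the client's config file and Cephx identity.
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.jdoe')
cluster.connect()

# Open an I/O context on the pool, then write and read back an object.
ioctx = cluster.open_ioctx('jdoe_pool')
ioctx.write_full('greeting', b'hello ceph')
data = ioctx.read('greeting')  # b'hello ceph'

ioctx.close()
cluster.shutdown()
```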
Rados benchmarks
On a 1G ethernet client
Write (16 concurrent operations)
Read
On a 10G ethernet client
Write (16 concurrent operations)
- Pool replication size : 1
rados -c ~/.ceph/config --id pmorillo --pool pmorillo_bench bench 60 write --no-cleanup

Maintaining 16 concurrent writes of 4194304 bytes for at least 60 seconds.
Object prefix: benchmark_data_paranoia-3.rennes.grid5000.fr_5626
  sec Cur ops   started  finished  avg MB/s  cur MB/s  last lat   avg lat
    0       0         0         0         0         0         -         0
    1      16       146       130   519.915       520  0.488794  0.103537
    2      16       247       231   461.933       404  0.164705  0.126076
    3      16       330       314   418.612       332  0.036563  0.148763
    4      16       460       444   443.941       520  0.177378  0.141696
...
Total time run:         60.579488
Total writes made:      5965
Write size:             4194304
Bandwidth (MB/sec):     393.863
Stddev Bandwidth:       83.789
Max bandwidth (MB/sec): 520
Min bandwidth (MB/sec): 0
Average Latency:        0.162479
Stddev Latency:         0.195071
Max latency:            1.7959
Min latency:            0.033313
Read

rados -c ~/.ceph/config --id pmorillo --pool pmorillo_bench bench 60 seq
RBD (Rados Block Device)
Requirements: Create a Ceph pool • Configure Ceph on client
Create a Rados Block Device
Create filesystem and mount RBD
id  pool       image       snap  device
1   jdoe_pool  <rbd_name>  -     /dev/rbd1
Filesystem  Size  Used  Avail  Use%  Mounted on
/dev/sda3    15G  1.6G    13G   11%  /
...
/dev/sda5   525G   70M   498G    1%  /tmp
/dev/rbd1    93M  1.6M    85M    2%  /mnt/rbd
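The commands behind the two steps above can be sketched as follows; jdoe, jdoe_pool, <rbd_name> and the 100 MB size are placeholders matching the sample outputs:

```shell
# Create a 100 MB RBD image in the pool (size is in megabytes).
rbd --id jdoe --pool jdoe_pool create <rbd_name> --size 100

# Map it to a local block device (needs the rbd kernel module) and check.
rbd --id jdoe --pool jdoe_pool map <rbd_name>
rbd showmapped

# Create a filesystem on the mapped device and mount it.
mkfs.ext4 /dev/rbd1
mkdir -p /mnt/rbd
mount /dev/rbd1 /mnt/rbd
df -h
```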
Resize, snapshots, copy, etc.
See :
- http://ceph.com/docs/master/rbd/rados-rbd-cmds/
- http://ceph.com/docs/master/man/8/rbd/#examples
- http://ceph.com/docs/master/rbd/rbd-snapshot/
- rbd -h
QEMU/RBD
Requirements: Create a Ceph pool • Configure Ceph on client
Convert a qcow2 file into RBD
debian7
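A sketch of the conversion, assuming a local debian7.qcow2 image and the usual placeholder login and pool (qemu-img addresses RBD images through its rbd: protocol prefix):

```shell
# Convert a local qcow2 image into a raw RBD image inside the pool.
qemu-img convert -f qcow2 -O raw debian7.qcow2 rbd:jdoe_pool/debian7

# Check the resulting image.
rbd --id jdoe --pool jdoe_pool ls
```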