= Introduction =

Ceph is a distributed object store and file system designed to provide excellent performance, reliability and scalability. '''See:''' http://ceph.com/

= Architecture =

[[File:Ceph stack.png]]

= Grid'5000 Deployment =

{| class="wikitable"
|-
! Sites !! Size !! Configuration !! Rados !! RBD !! CephFS !! RadosGW
|-
| Rennes || 8.6 TB || 16 OSDs on 4 nodes || Yes || Yes || No || No
|}

= Configuration =

== Generate your key ==

In order to access the object store, you will need a Cephx key. '''See:''' https://api.grid5000.fr/sid/storage/ceph/ui/

Your key is also available from the frontends:

  # curl -k https://api.grid5000.fr/sid/storage/ceph/auths/jdoe.keyring
  [client.jdoe]
    key = AQBwknVUwAPAIRAACddyuVTuP37M55s2aVtPrg==

''Note:'' Replace <code class='replace'>jdoe</code> with your login.

== Create Ceph pool ==

''Need:'' [[Ceph#Generate_your_key|Generate your key]]

Manage your pool from the Grid'5000 Ceph frontend: https://api.grid5000.fr/sid/storage/ceph/ui/
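Once a client is configured (see [[Ceph#Configure_Ceph_on_clients|Configure Ceph on clients]] below), you can check from a node that the pool exists, for instance by listing the pools visible to your key:

  node # rados --id jdoe lspools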

== Configure Ceph on clients ==

=== On a deployed environment ===

  frennes # oarsub -I -t deploy
  frennes # kadeploy3 -e ubuntu-x64-1404 -k -f $OAR_NODEFILE
  frennes # ssh root@node
  node # apt-get install ceph

Create a ceph configuration file <code class='file'>/etc/ceph/ceph.conf</code>:

  [global]
  mon initial members = ceph0,ceph1,ceph2
  mon host = 172.16.111.30,172.16.111.31,172.16.111.32

Create a ceph keyring file <code class='file'>/etc/ceph/ceph.client.</code><code class='replace'>jdoe</code><code class='file'>.keyring</code> with your keyring:

  [client.jdoe]
    key = AQBwknVUwAPAIRAACddyuVTuP37M55s2aVtPrg==
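Rather than pasting the key by hand, you can fetch the keyring from the frontend (the same curl call as in [[Ceph#Generate_your_key|Generate your key]], redirected into place):

  node # curl -k https://api.grid5000.fr/sid/storage/ceph/auths/jdoe.keyring > /etc/ceph/ceph.client.jdoe.keyring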

= Usage =

== Rados access ==

''Need:'' [[Ceph#Create_Ceph_pool|Create a Ceph pool]]

=== From command line ===

==== Put an object into a pool ====

  node # rados --id jdoe --pool jdoe_pool put <objectname> <infile>

==== List objects of a pool ====

  node # rados --id jdoe --pool jdoe_pool ls

==== Get an object from a pool ====

  node # rados --id jdoe --pool jdoe_pool get <objectname> <outfile>

==== Remove an object ====

  node # rados --id jdoe --pool jdoe_pool rm <objectname>
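Putting these commands together, a complete round trip (the object name hello-object and the local file paths are only illustrative):

  node # echo "hello ceph" > /tmp/hello.txt
  node # rados --id jdoe --pool jdoe_pool put hello-object /tmp/hello.txt
  node # rados --id jdoe --pool jdoe_pool ls
  node # rados --id jdoe --pool jdoe_pool get hello-object /tmp/hello.out
  node # diff /tmp/hello.txt /tmp/hello.out
  node # rados --id jdoe --pool jdoe_pool rm hello-object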

=== From your application code (C/C++, Python, Java, Ruby, ...) ===

'''See:''' http://ceph.com/docs/master/rados/api/librados-intro/
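As a minimal sketch, the same put/get/remove cycle through the Python binding (shipped in the python-ceph package on Debian/Ubuntu at the time of writing; client name, pool and object name reuse the examples above):

  import rados
  # Connect as client.jdoe, using the configuration and keyring files created above
  cluster = rados.Rados(conffile='/etc/ceph/ceph.conf',
                        conf={'keyring': '/etc/ceph/ceph.client.jdoe.keyring'},
                        name='client.jdoe')
  cluster.connect()
  ioctx = cluster.open_ioctx('jdoe_pool')            # open an I/O context on the pool
  ioctx.write_full('hello-object', b'hello ceph')    # put
  print(ioctx.read('hello-object'))                  # get
  ioctx.remove_object('hello-object')                # rm
  ioctx.close()
  cluster.shutdown()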


== RBD (Rados Block Device) ==

''Need:'' [[Ceph#Create_Ceph_pool|Create a Ceph pool]]
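A minimal sketch of typical RBD usage from a configured node, assuming the client and pool from above (the image name myimage is illustrative; --size is in megabytes):

  node # rbd --id jdoe --pool jdoe_pool create myimage --size 1024
  node # rbd --id jdoe --pool jdoe_pool map myimage
  node # mkfs.ext4 /dev/rbd0
  node # mount /dev/rbd0 /mnt

<code>rbd map</code> prints the kernel block device it attached (typically <code class='file'>/dev/rbd0</code>); it can then be formatted and mounted like any other disk.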


== QEMU/RBD ==
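A minimal sketch of backing a QEMU disk with an RBD image, using QEMU's <code>rbd:</code> disk specification (the image name datadisk is illustrative; <code>:id=</code> and <code>:conf=</code> tell QEMU which Cephx identity and configuration file to use):

  node # qemu-img create -f rbd rbd:jdoe_pool/datadisk:id=jdoe:conf=/etc/ceph/ceph.conf 10G
  node # qemu-system-x86_64 -m 1024 -drive format=raw,file=rbd:jdoe_pool/datadisk:id=jdoe:conf=/etc/ceph/ceph.conf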