Network ports

  • These are the TCP ports to open in your firewall
Port   Description
616    GlusterFS
38465  GlusterFS
38466  GlusterFS
38468  GlusterFS
38469  GlusterFS
24007  glusterd (management)
49153  Bricks
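As a hedged sketch (the iptables rule form and running as root are assumptions; adapt to your firewall tooling), the table above can be turned into ACCEPT rules:

```shell
# Build one iptables ACCEPT rule per GlusterFS TCP port from the table above.
# Rules are printed for review; pipe the output to sh (as root) to apply them.
GLUSTER_TCP_PORTS="616 24007 38465 38466 38468 38469 49153"
rules=""
for port in $GLUSTER_TCP_PORTS; do
  rules="${rules}iptables -A INPUT -p tcp --dport ${port} -j ACCEPT
"
done
printf '%s' "$rules"
```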

Installation

  • Install GlusterFS repository
  • Install packages
yum install glusterfs{,-server,-fuse,-geo-replication}
  • Start the service
service glusterd start

Quick start

  • Make a server trust another Gluster node
gluster peer probe <ip>
  • Let's assume you have a partition mounted at /export/test on node1 and node2 to replicate with GlusterFS
  • Setup a volume
gluster volume create test replica 2 node1:/export/test node2:/export/test
gluster volume start test
  • Now you can mount and use the volume (by its volume name, not the brick path)
mount -t glusterfs node1:/test /mnt
  • For a redundant mount insert the following into your /etc/fstab
$GFS1_NODE:/test /mnt glusterfs defaults,_netdev,backupvolfile-server=$GFS2_NODE 0 0
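Filling in the placeholders with a small script (the hostnames node1/node2 here are hypothetical):

```shell
# Compose the redundant-mount fstab entry; verify it, then append it to /etc/fstab.
GFS1_NODE=node1   # primary node (hypothetical name)
GFS2_NODE=node2   # backup volfile server (hypothetical name)
fstab_line="$GFS1_NODE:/test /mnt glusterfs defaults,_netdev,backupvolfile-server=$GFS2_NODE 0 0"
echo "$fstab_line"
```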

Manage peers

  • Add a new one
gluster peer probe <ip>
  • Show status
gluster peer status
  • Remove one
gluster peer detach <ip>

Manage volumes

  • Create a new one
gluster volume create test replica 2 node1:/export/test node2:/export/test
gluster volume start test
  • List all volumes and show their status
gluster volume list
gluster volume status
  • Shrink a volume by removing bricks (for a replica volume, lower the replica count to match)
gluster volume remove-brick test replica 1 node2:/export/test force
  • Remove one (stop it first, then delete it)
gluster volume stop test
gluster volume delete test
  • Add a new brick to a volume (here raising the replica count from 2 to 3)
gluster volume add-brick <volname> replica 3 node3:/export/moretest
  • Manage access by ip
gluster volume set testvol auth.allow 192.168.10.*
# or allow all clients (the default)
gluster volume set testvol auth.allow "*"
gluster volume set testvol auth.reject 192.168.10.*
  • How much space to reserve for logs / metadata?
gluster volume set <volname> cluster.min-free-disk 5%
  • Enable self-healing (on by default)
gluster volume set <volname> cluster.self-heal-daemon on
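The per-option `volume set` calls above can be applied in one pass; the loop below only prints the commands for review, and the volume name `test` plus the option values simply restate this section:

```shell
# Print one "gluster volume set" command per option/value pair.
VOLNAME=test
cmds=""
while read -r opt val; do
  cmds="${cmds}gluster volume set $VOLNAME $opt $val
"
done <<EOF
auth.allow 192.168.10.*
cluster.min-free-disk 5%
cluster.self-heal-daemon on
EOF
printf '%s' "$cmds"
```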

NFS export

  • Start rpcbind
  • Start nfslock (rpc.statd)
  • Start glusterd
  • Adjust firewall
Port   Description
2049   GlusterFS (NFS)
111    RPCbind
54539  RPC statd
38003  RPCbind
  • Now you can mount it with
mount -t nfs -o mountproto=tcp,vers=3 node1:/test /mnt
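The three services above have to come up in order (rpcbind first); a minimal sketch, assuming SysV init scripts as used elsewhere on this page:

```shell
# rpcbind must be running before nfslock and glusterd register with it.
# Commands are printed for review; drop the echo to actually start the services.
started=""
for svc in rpcbind nfslock glusterd; do
  echo "service $svc start"
  started="$started $svc"
done
```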

Quotas

gluster volume quota <volname> enable
gluster volume quota <volname> limit-usage <directory> 10GB
gluster volume quota <volname> list
gluster volume quota <volname> remove <directory>
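A small helper can make the limit-usage call easier to repeat; the function name and the example directory are made up here:

```shell
# Print (not run) the quota command for one directory limit on a volume.
quota_limit() {
  vol=$1; dir=$2; limit=$3
  echo "gluster volume quota $vol limit-usage $dir $limit"
}
quota_limit test /projects 10GB
```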

Performance tuning

  • Performance information
gluster volume top <volname> read-perf
gluster volume top <volname> write-perf
  • Profiling
gluster volume profile <volname> start
gluster volume profile <volname> info
gluster volume profile <volname> stop
  • Setting read cache size (default 32MB)
gluster volume set <volname> performance.cache-size 256MB
  • Stripe block size
gluster volume set <volname> cluster.stripe-block-size 128KB
  • I/O threads
gluster volume set <volname> performance.io-thread-count 32

Troubleshooting

  • requested NFS version or transport protocol is not supported -> you tried to mount with UDP, or you didn't start rpcbind, nfslock and glusterd in the right order
  • Protocol not supported -> you tried to mount with NFS version 4 instead of 3
  • node is already part of another cluster -> delete /var/lib/glusterd/peers/*
  • split brain means conflicting changes were detected on both replicas; trigger a full heal and check the result
gluster volume heal <volname> full
gluster volume heal <volname> info
  • `{path} or a prefix of it is already part of a volume` -> the brick path still carries extended attributes from a previous volume; clear them before reusing it
setfattr -x trusted.glusterfs.volume-id $brick_path
setfattr -x trusted.gfid $brick_path
rm -rf $brick_path/.glusterfs
service glusterd restart
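The cleanup above can be wrapped in a helper that prints the commands for a given brick path, so they can be reviewed before anything destructive runs (the function name is made up here):

```shell
# Print (not run) the commands that make a formerly used brick path reusable.
brick_cleanup_cmds() {
  bp=$1
  echo "setfattr -x trusted.glusterfs.volume-id $bp"
  echo "setfattr -x trusted.gfid $bp"
  echo "rm -rf $bp/.glusterfs"
}
brick_cleanup_cmds /export/test
```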