Description of problem:
We have a GlusterFS volume whose data amounts to roughly 700 MB, yet the `.glusterfs` directory on the brick occupies 2.6 TB. How can we clean up the `.glusterfs` directory to release that space?
The exact command to reproduce the issue:
The full output of the command that failed:
Expected results:
Mandatory info:
- The output of the `gluster volume info` command:
- The output of the `gluster volume status` command:
- The output of the `gluster volume heal` command:
- Provide the logs found at the following location on the client and server nodes:
/var/log/glusterfs/
- Is there any crash? If so, provide the backtrace and coredump.
Additional info:
- The operating system / glusterfs version:
Note: Please hide any confidential data that you don't want to share publicly, such as IP addresses, file names, hostnames, or other configuration details.
Which directory under .glusterfs holds the most data? Please share the details. Also, was the brick path part of any other volume before this volume was created?
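To help answer that question, here is a minimal sketch of how you could inspect usage under `.glusterfs` on a brick. The brick path `/data/brick1` is a placeholder; substitute your actual brick mount point. Note that entries under `.glusterfs/XX/YY/` are normally hard links to the user-visible files, so a regular file there with a link count of 1 is a candidate for a stale (orphaned) gfid entry. This is only a diagnostic heuristic: do not delete anything blindly, since some paths under `.glusterfs` (e.g. `indices/`, internal symlinks for directories) legitimately have a link count of 1.

```shell
#!/bin/sh
# Hypothetical brick path; replace with your actual brick mount point.
BRICK=/data/brick1

# Show per-subdirectory usage under .glusterfs, largest first.
du -sh "$BRICK"/.glusterfs/* 2>/dev/null | sort -rh | head -20

# List regular files whose link count is 1: for gfid entries this usually
# means the user-visible counterpart is gone (a leaked gfid file).
find "$BRICK"/.glusterfs -type f -links 1 -print | head -20
```

The output of the `du` line is what identifies which subdirectory is consuming the 2.6 TB; sharing it in the issue would make the problem much easier to diagnose.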