
[Epic] Command to resize the disk #127

Closed · opened May 13, 2019 · 57 comments

@jmazzitelli commented May 13, 2019

The initial size of the image is 15+1GB. This might not be enough for all use cases, so we need to find a way to allow the image to be resized on initial start (or while stopped). Note: the command(s) will likely share functionality.

eg.

# crc start --disk-size 20GB

or

# crc config disk-size +5GB

(submitting this issue as per the Slack response from Gerard Braad: "at the moment we do not support resizing of the disk image. please file an issue for this.")

I tried to install Maistra (Istio), its bookinfo demo, and Kiali, but am hitting disk resource limitations.

CRC should provide a mechanism to define the amount of disk space to assign to the VM image in order to run larger clusters.

@jmazzitelli commented May 15, 2019

Using qemu-img resize and virt-resize, I was able to grow the crc disk image to 40G, putting the extra space on /dev/sda3. Here's what the crc machine image reports:

$ sudo virt-filesystems --long -h --all -a $HOME/.crc/machines/crc/crc 
Name       Type        VFS      Label  MBR  Size  Parent
/dev/sda1  filesystem  unknown  -      -    1.0M  -
/dev/sda2  filesystem  ext4     boot   -    1.0G  -
/dev/sda3  filesystem  xfs      root   -    39G   -
/dev/sda1  partition   -        -      -    1.0M  /dev/sda
/dev/sda2  partition   -        -      -    1.0G  /dev/sda
/dev/sda3  partition   -        -      -    39G   /dev/sda
/dev/sda   device      -        -      -    40G   -

But when I log into the CRC VM, I see this:

[core@crc-jtskh-master-0 ~]$ sudo fdisk -l
Disk /dev/vda: 40 GiB, 42949672960 bytes, 83886080 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 17F91114-3EA4-46F3-A756-D8EA6DDBE05B

Device       Start      End  Sectors Size Type
/dev/vda1     2048     4095     2048   1M BIOS boot
/dev/vda2     4096  2101247  2097152   1G Linux filesystem
/dev/vda3  2101248 83883519 81782272  39G Linux filesystem

[core@crc-jtskh-master-0 ~]$ lsblk
NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda    252:0    0  40G  0 disk 
├─vda1 252:1    0   1M  0 part 
├─vda2 252:2    0   1G  0 part /boot
└─vda3 252:3    0  39G  0 part /sysroot

[core@crc-jtskh-master-0 ~]$ df
Filesystem     1K-blocks     Used Available Use% Mounted on
devtmpfs         7796036        0   7796036   0% /dev
tmpfs            7826572        0   7826572   0% /dev/shm
tmpfs            7826572     8988   7817584   1% /run
tmpfs            7826572        0   7826572   0% /sys/fs/cgroup
/dev/vda3       15715328 10646412   5068916  68% /sysroot
/dev/vda2         999320   138320    792188  15% /boot
tmpfs            1565312        0   1565312   0% /run/user/1000

Why does vda3 show as 39G (for /sysroot) in the lsblk output, while df still shows vda3 (/sysroot) at only 15G (15715328), 68% used? This is with nothing installed in my cluster yet; this is just after a crc start.

I'm going to install Istio and its bookinfo demo into my cluster and see what happens. I suspect /sysroot is going to fill up to 100% usage (or close to it).
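A likely explanation (a hedged note, not confirmed at this point in the thread): df reports the size of the filesystem, while lsblk and fdisk report the partition, so the enlarged partition still needs its XFS filesystem grown. A minimal sketch, assuming /sysroot is XFS on the enlarged partition:

```shell
# Growing XFS happens online, from inside the running VM:
#   sudo xfs_growfs /sysroot
# (shown as a comment because it needs the VM; the arithmetic below is runnable)
#
# Sanity check of the fdisk output above: 40 GiB expressed as 512-byte sectors
bytes=$((40 * 1024 * 1024 * 1024))
sectors=$((bytes / 512))
echo "$sectors"   # → 83886080, matching fdisk's "83886080 sectors"
```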

@jmazzitelli commented:

With base Istio installed:

[core@crc-jtskh-master-0 ~]$ df
Filesystem     1K-blocks     Used Available Use% Mounted on
devtmpfs         7796036        0   7796036   0% /dev
tmpfs            7826572        0   7826572   0% /dev/shm
tmpfs            7826572    10540   7816032   1% /run
tmpfs            7826572        0   7826572   0% /sys/fs/cgroup
/dev/vda3       15715328 11819388   3895940  76% /sysroot
/dev/vda2         999320   138320    792188  15% /boot
tmpfs            1565312        0   1565312   0% /run/user/1000

There is no DiskPressure yet:

$ oc get node crc-jtskh-master-0  -o yaml | grep DiskPressure
    reason: KubeletHasNoDiskPressure
    type: DiskPressure

Then with its bookinfo demo installed:

[core@crc-jtskh-master-0 ~]$ df
Filesystem     1K-blocks     Used Available Use% Mounted on
devtmpfs         7796036        0   7796036   0% /dev
tmpfs            7826572        0   7826572   0% /dev/shm
tmpfs            7826572    11172   7815400   1% /run
tmpfs            7826572        0   7826572   0% /sys/fs/cgroup
/dev/vda3       15715328 13467996   2247332  86% /sysroot
/dev/vda2         999320   138320    792188  15% /boot
tmpfs            1565312        0   1565312   0% /run/user/1000

Now I have disk pressure:

$ oc get node crc-jtskh-master-0  -o yaml | grep DiskPressure
    reason: KubeletHasDiskPressure
    type: DiskPressure

And now I can see it purging files to help get disk usage below the 80% threshold:

[core@crc-jtskh-master-0 ~]$ df
Filesystem     1K-blocks     Used Available Use% Mounted on
...
/dev/vda3       15715328 11928628   3786700  76% /sysroot
...

@jmazzitelli commented:

So, clearly my problem is that the additional room on the partition isn't making its way into the vda3 filesystem. You can see I enlarged the vda3 partition:

from fdisk -l:

/dev/vda3 2101248 83883519 81782272 39G Linux filesystem

and the filesystem even shows it is 39G from the lsblk command:

└─vda3 252:3 0 39G 0 part /sysroot

but for some reason that additional space is not recognized at runtime. From df:

/dev/vda3 15715328 13467996 2247332 86% /sysroot

I increased my disk space using the commands below. If anyone knows what I am missing, please let me know. Unless there is a bug somewhere in the VM config? I'm at a loss.

$ CRC_MACHINE_IMAGE="$HOME/.crc/machines/crc/crc"
$ crc stop
$ sudo qemu-img resize ${CRC_MACHINE_IMAGE} +24G
$ sudo cp ${CRC_MACHINE_IMAGE} ${CRC_MACHINE_IMAGE}.ORIGINAL
$ sudo virt-resize --expand /dev/sda3 ${CRC_MACHINE_IMAGE}.ORIGINAL ${CRC_MACHINE_IMAGE}
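A hedged sanity check for the steps above, assuming qemu-img is installed: the original image is roughly 16G (the 15+1GB mentioned at the top of this issue), so adding 24G should report a 40G virtual size.

```shell
# Inspect the resized image; qemu-img info prints a "virtual size" line.
CRC_MACHINE_IMAGE="$HOME/.crc/machines/crc/crc"
sudo qemu-img info "${CRC_MACHINE_IMAGE}" | grep 'virtual size'
# Expected arithmetic: 16G base + 24G added
echo "$((16 + 24))G"   # → 40G
```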

@cfergeau (Contributor) commented:

We can probably make the size of /dev/vda3 filesystem bigger on the image we ship. The image is thinly provisioned, so this should not have a noticeable impact on the tarball size.

@jmazzitelli commented May 15, 2019

FWIW: I do not have libguestfs-xfs installed, and when I run virt-resize I get this warning:

virt-resize: warning: unknown/unavailable method for expanding the xfs
filesystem on /dev/sda3

It is possible that this is why my filesystem is not getting the increased space. I will install that package, test, and see what happens.

@jmazzitelli commented:

> We can probably make the size of /dev/vda3 filesystem bigger on the image we ship. The image is thinly provisioned, so this should not have a noticeable impact on the tarball size.

It would be preferable if crc provided an option to configure the partition sizes (particularly the vda3 partition) and ran the necessary commands to do the resizing, because whatever size you ship with, someone is going to want more :)

@jmazzitelli commented May 15, 2019

I installed libguestfs-xfs and that got me a little further. I now see it trying to grow the filesystem during the virt-resize, but it gets this error:

virt-resize: error: libguestfs error: mount: mount exited with status 32: 
mount: wrong fs type, bad option, bad superblock on /dev/sda3,
       missing codepage or helper program, or other error

I am on RHEL 7.6; the CRC VM is Red Hat Enterprise Linux CoreOS release 4.1. It looks like it might be this:

https://bugzilla.redhat.com/show_bug.cgi?id=1671235

@cfergeau commented May 15, 2019

CoreOS 4.1 is based on RHEL8, so it's indeed going to hit the limitation mentioned in these bugs [ref]

> With the release of RHEL8, both the XFS and ext4 filesystems have gained new features and capabilities. XFS now has reflink capability for file-level copy-on-write, as well as (optionally) a reverse mapping btree to map disk blocks back to files for future enhanced repair capabilities. ext4 has been enhanced to include metadata checksums for robust detection of on-disk corruption.
>
> Each of these new features is read-only compatible with RHEL7, meaning that a RHEL7 kernel is able to mount a RHEL8 filesystem which contains these features in readonly mode, but read/write mode is not possible.
>
> As a result, certain virtual image manipulation tools such as guestfish, virt-customize, or any other utility based in libguestfs will be unable to manipulate RHEL8 images on a RHEL7 system if the filesystems within the image contain these new features.
>
> While there is no workaround for this limitation, it is possible to disable the new features at mkfs time, if the features are not required or desired. For XFS, to make a RHEL7-compatible filesystem, use the "-m reflink=0" option at mkfs time. For ext4, use "-O ^metadata_csum" at mkfs time.

@jmazzitelli commented May 15, 2019

> CoreOS 4.1 is based on RHEL8, so it's indeed going to hit the limitation mentioned in these bugs [ref]

OK, then based on that... this is a high priority:

> We can probably make the size of /dev/vda3 filesystem bigger on the image we ship. The image is thinly provisioned, so this should not have a noticeable impact on the tarball size.

It's high priority at least for anyone needing more than 15G in their cluster who is on a Red Hat CSB machine or RHEL 7.x: they can't resize the image and are thus stuck, unable to use CRC.

@jmazzitelli commented:

> if the features are not required or desired. For XFS, to make a RHEL7-compatible filesystem, use the "-m reflink=0" option at mkfs time. For ext4, use "-O ^metadata_csum" at mkfs time.

Or the CRC team can see if those XFS features are really needed (I have no idea what these new features are); if not, the crc image's filesystems should be created with those options so they are compatible with RHEL 7.x systems.

@gbraad commented May 16, 2019

> and have crc run the necessary cmds to do this resizing. Because whatever size you ship with, someone is going to want more :)

This is planned for later releases. We will first focus on support for other hypervisors before venturing into 'advanced' use cases, meaning configuration changes beyond the out-of-the-box (OOTB) experience. I'll try to make this visible on the project kanbans at https://github.com/code-ready/crc/projects

@gbraad commented May 17, 2019

Original issue has been modified to mention possible resizing commands

@gbraad gbraad changed the title be able to resize amount of disk space available to crc VM Command to resize the disk May 17, 2019
@jmazzitelli commented:

In the meantime, I would suggest adding an FAQ or some documentation telling users how to expand the resources used by the CRC image should they need it. Here's a quick summary of how to increase memory, CPUs, and disk (this is what I do). All of these instructions assume the CRC VM is currently running.

For MEMORY:

# increase memory to 16GB
virsh -c qemu:///system setmaxmem crc 16000000 --config
virsh -c qemu:///system setmem crc 16000000 --config
crc stop
crc start

For CPUS:

# increase virtual CPUs to 5
virsh -c qemu:///system setvcpus crc 5 --maximum --config
virsh -c qemu:///system setvcpus crc 5 --config
crc stop
crc start

For DISK SPACE:

# increase the /dev/sda3 (known as vda3 in the VM) disk partition size by an additional 15GB
# You need to have installed (e.g. with dnf or yum) libguestfs-tools and libguestfs-xfs for this to work
# These steps will take a long time to complete - be patient.
# NOTE: You cannot do this if you are on RHEL 7.x due to: https://access.redhat.com/solutions/3914591
#    Workaround for that problem:
#    1. Copy the "crc" machine image ($HOME/.crc/machines/crc/crc) to a Fedora29+ or RHEL8 machine
#    2. While logged onto that F29+/RHEL8 machine, set CRC_MACHINE_IMAGE to the location of your new copy of the "crc" machine image
#    3. Run the sudo steps below on that new "crc" machine image file
#    4. Copy the newly resized "crc" machine image file back to its original location on your RHEL 7 machine
#    5. Start CRC again.
CRC_MACHINE_IMAGE=${HOME}/.crc/machines/crc/crc
crc stop
sudo qemu-img resize ${CRC_MACHINE_IMAGE} +15G
sudo cp ${CRC_MACHINE_IMAGE} ${CRC_MACHINE_IMAGE}.ORIGINAL
sudo virt-resize --expand /dev/sda3 ${CRC_MACHINE_IMAGE}.ORIGINAL ${CRC_MACHINE_IMAGE}
sudo rm ${CRC_MACHINE_IMAGE}.ORIGINAL
crc start

@kowen-rh (Contributor) commented:

@jmazzitelli We can certainly add this to the known issues we're collecting to address in #131. I've been keeping an eye on it for that exact reason. ;)

@sangupta1729 commented:

Hi,

I need to increase the disk size of my crc machine running on a Mac Pro. Can someone please share the commands to do the same?

@stale bot commented Feb 11, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the status/stale Issue went stale; did not receive attention or no reply from the OP label Feb 11, 2020
@morningspace commented Feb 20, 2020

I happened to come across this issue and tried the workaround described in the comment above. It works up to CRC v1.5.0; however, the same approach doesn't work since v1.6.0, where the filesystem mapped to /sysroot has changed from /dev/vda3 to /dev/mapper/coreos-luks-root-nocrypt. By running the command below:

$ sudo virt-filesystems --long -h --all -a $HOME/.crc/machines/crc/crc

I can see it's mapped to /dev/sda4, so I increased its size from 30G to 60G using qemu-img and virt-resize.

Then I log in to the VM:

[core@crc-w6th5-master-0 ~]$ sudo fdisk -l
Disk /dev/vda: 61 GiB, 65498251264 bytes, 127926272 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 00000000-0000-4000-A000-000000000001

Device       Start       End   Sectors  Size Type
/dev/vda1     2048    788479    786432  384M Linux filesystem
/dev/vda2   788480   1048575    260096  127M EFI System
/dev/vda3  1048576   1050623      2048    1M BIOS boot
/dev/vda4  1050624 127926238 126875615 60.5G Linux filesystem


Disk /dev/mapper/coreos-luks-root-nocrypt: 60.5 GiB, 64943537664 bytes, 126842847 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
[core@crc-w6th5-master-0 ~]$ lsblk
NAME                         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda                          252:0    0   61G  0 disk
|-vda1                       252:1    0  384M  0 part /boot
|-vda2                       252:2    0  127M  0 part /boot/efi
|-vda3                       252:3    0    1M  0 part
`-vda4                       252:4    0 60.5G  0 part
  `-coreos-luks-root-nocrypt 253:0    0 60.5G  0 dm   /sysroot

It shows Disk /dev/mapper/coreos-luks-root-nocrypt is 60.5G, but:

[core@crc-w6th5-master-0 ~]$ df -h
Filesystem                            Size  Used Avail Use% Mounted on
devtmpfs                               13G     0   13G   0% /dev
tmpfs                                  13G  168K   13G   1% /dev/shm
tmpfs                                  13G  8.8M   13G   1% /run
tmpfs                                  13G     0   13G   0% /sys/fs/cgroup
/dev/mapper/coreos-luks-root-nocrypt   31G  8.5G   23G  28% /sysroot
/dev/vda1                             364M   84M  257M  25% /boot
/dev/vda2                             127M  3.0M  124M   3% /boot/efi
tmpfs                                 2.5G  4.0K  2.5G   1% /run/user/1000

/dev/mapper/coreos-luks-root-nocrypt is still 31G.

Any ideas? @jmazzitelli @cfergeau @gbraad
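One hypothesis, untested in this thread but matching the earlier vda3 case: the device-mapper device grew, while the XFS filesystem on it was never expanded. From inside the VM, `sudo xfs_growfs /sysroot` may resolve it (on CoreOS /sysroot can be mounted read-only, so a `sudo mount -o remount,rw /sysroot` may be needed first). The sizes reported above are at least internally consistent:

```shell
# fdisk reports 126842847 sectors for coreos-luks-root-nocrypt;
# at 512 bytes per sector that should equal the reported byte count.
sectors=126842847
echo $((sectors * 512))   # → 64943537664, matching "64943537664 bytes"
```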

@gbraad commented Sep 30, 2020

Thanks Jeff (@jeffsaremi)

@jcordes73 commented:

> sudo qemu-img resize $(find ~/.crc/cache/ -name crc.qcow2) +90G

I tried this and I get an error (CRC 1.16, using +20G):

INFO Checking size of the disk image /home/jcordes/.crc/cache/crc_libvirt_4.5.9/crc.qcow2 ...
ERRO Invalid bundle disk image '/home/jcordes/.crc/cache/crc_libvirt_4.5.9/crc.qcow2': Expected size 10592321536 Got 10592322352
Invalid bundle disk image '/home/jcordes/.crc/cache/crc_libvirt_4.5.9/crc.qcow2': Expected size 10592321536 Got 10592322352

So I adjusted the size in ~/.crc/cache/crc_libvirt_4.5.9/crc-bundle-info.json (using vi) to get this working:

"storage": {
"diskImages": [
{
"name": "crc.qcow2",
"format": "qcow2",
"size": "10592322352",
"sha256sum": "05fe2c5b33f99f02b0a34f05dced304a756642bea706c055a77cb93ba77a89fc"
}
]
},
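A sketch of regenerating those two fields after resizing, assuming GNU coreutils and the paths from the error message; whether crc also validates the sha256sum is not confirmed in this thread, but updating it alongside the size is the safe move:

```shell
IMG="$HOME/.crc/cache/crc_libvirt_4.5.9/crc.qcow2"
stat -c %s "$IMG"                  # new value for "size" in crc-bundle-info.json
sha256sum "$IMG" | cut -d' ' -f1   # new value for "sha256sum"
```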

@cfergeau cfergeau added this to In progress in Sprint 191 Oct 6, 2020
@cfergeau cfergeau mentioned this issue Oct 20, 2020
@cfergeau cfergeau moved this from In progress to Done in Sprint 191 Oct 26, 2020
@cfergeau cfergeau moved this from Done to In progress in Sprint 191 Oct 26, 2020
@cfergeau cfergeau added this to To do in Sprint 194 Dec 8, 2020
@cfergeau cfergeau removed this from To do in Sprint 194 Dec 8, 2020
@gbraad gbraad unpinned this issue Feb 1, 2021
@gbraad gbraad added this to To do in Sprint 198 Mar 2, 2021
@sspeiche (Contributor) commented:

@gbraad what is left to do with this issue? As I see it, it is done (Sprint 191).

@cfergeau (Contributor) commented:

It's implemented on Windows and Linux; it's still missing on macOS (#1640). crc-org/machine-driver-hyperkit#30 is a first step towards solving this.

@gbraad commented Mar 12, 2021

We decided to close the issue and open a new one for macOS, as it would need a lot more work than anticipated.

I added this issue during the meeting as I did not find the macOS issue that quickly.

@cfergeau cfergeau added this to To do in Sprint 199 Mar 23, 2021
@cameronkerrnz commented:

Can someone please document how SSH into the VM is meant to work in vsock mode with Hyper-V? 'crc daemon' doesn't expose an SSH port, AFAIK.

CodeReady Containers version: 1.24.0+5f06e84b
OpenShift version: 4.7.2 (embedded in executable)

I have run the following command:

Resize-VHD -Path "$global:homePath\.crc\cache\$crcFileName\crc.vhdx" -SizeBytes ( $DiskSizeGB * 1024 * 1024 * 1024 )

But now I need to SSH in to run xfs_growfs, and I am getting connection refused (api.crc.testing resolves to 127.0.0.1):

❯ ssh -i C:\Users\me\.crc\cache\crc_hyperv_4.7.2\id_ecdsa_crc core@api.crc.testing
ssh: connect to host api.crc.testing port 22: Connection refused

(Also, if 'crc daemon' or similar were to expose this, it had better not be on port 22 or 2222 etc., as those are likely to clash.)


@cfergeau (Contributor) commented:

> Resizing the disk should not need manual commands inside the VM; so please file an issue for Windows for this.

This is a known issue with 1.24 https://access.redhat.com/documentation/en-us/red_hat_codeready_containers/1.24/html/release_notes_and_known_issues/issues_on_microsoft_windows#disk_resizing_does_not_work_as_expected

This should be fixed in the 1.25 release which is getting out real soon now.

@kitty-catt commented:

  • Would it be possible to make the CRC VM disk use dynamic allocation? For example, it could ship with a minimal disk which can expand to 128 GB.
  • I am currently on RHEL7; I had no luck starting CRC on RHEL8 some 2 months ago. Would I be better off switching to Fedora if I want to resize disks?

@kitty-catt commented:

Perhaps this is of help to somebody:

  1. I removed completed Tekton pods and Tekton PipelineRuns. Perhaps this releases ephemeral container storage for the images that were spun up and ran builds.
  2. I restarted CRC, and after that the disk pressure was gone. Perhaps a restart triggers a cleanup?

From oc get node / oc describe node:

ephemeral-storage 0 (0%) 0 (0%)

Warning EvictionThresholdMet 108m kubelet Attempting to reclaim ephemeral-storage
Normal NodeHasDiskPressure 108m kubelet Node crc-xl2km-master-0 status is now: NodeHasDiskPressure
Normal NodeHasNoDiskPressure 17m (x8 over 17m) kubelet Node crc-xl2km-master-0 status is now: NodeHasNoDiskPressure
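Step 1 above can be scripted; a sketch assuming an active oc login and a recent oc/kubectl (which support --all-namespaces and --field-selector on delete), with the project name as a placeholder:

```shell
# Remove pods that have run to completion, freeing their ephemeral storage
oc delete pods --all-namespaces --field-selector=status.phase==Succeeded
oc delete pods --all-namespaces --field-selector=status.phase==Failed
# Remove finished Tekton PipelineRuns in a given project (Tekton CRDs assumed installed;
# "my-project" is a placeholder name)
oc delete pipelineruns --all -n my-project
```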

@cfergeau commented May 5, 2021

> I am currently on RHEL7 ... had no luck with starting CRC on RHEL8 some 2 months ago. Would I be better off switching to Fedora if I want to resize disks?

Disk resizing on RHEL7/RHEL8 should work in recent crc releases; if it does not, that is a bug we need to fix (but please open a separate issue for it :)

@guillaumerose guillaumerose changed the title Command to resize the disk [Epic] Command to resize the disk Jun 21, 2021
@guillaumerose guillaumerose moved this from Shortlist to In progress in Backlog, blockers and roadmap Jun 21, 2021
@guillaumerose guillaumerose added kind/epic Large chunk of work and removed kind/task Workable task labels Jun 21, 2021
@guillaumerose guillaumerose moved this from In progress to Done in Backlog, blockers and roadmap Jun 23, 2021
@guillaumerose guillaumerose moved this from Done to In progress in Backlog, blockers and roadmap Jun 23, 2021
@guillaumerose (Contributor) commented:

The next macOS release will contain everything needed.
All 3 OSes are now fully supported.

@guillaumerose guillaumerose moved this from In progress to Done in Backlog, blockers and roadmap Jul 5, 2021
Labels
kind/epic Large chunk of work points/3 priority/major status/pinned Prevents the stale bot from closing the issue