[Feature] RAID Support #40
Info about the RAID setup is obtainable via https://github.com/sebhildebrandt/systeminformation#9-file-system (a quick sketch of reading that API follows below). Some new GUI options would need to be implemented for this to work, though. For example, you should be able to choose whether to show the information for the raw disk space or for the assembled arrays.

I have a very similar setup to @velmirslac, in that I have 1x main SSD and 2x 4TB HDDs in a ZFS pool mirroring each other.
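For anyone following along, a minimal sketch of reading that API (assuming systeminformation v5's promise-based interface and a Node ESM context for the top-level `await`):

```ts
import * as si from 'systeminformation';

// List every mounted filesystem with its size and usage.
// ZFS pools show up here as regular mounts, while their raw
// member disks only appear in blockDevices()/diskLayout().
const filesystems = await si.fsSize();

for (const fs of filesystems) {
  const usedGb = (fs.used / 1e9).toFixed(1);
  const sizeGb = (fs.size / 1e9).toFixed(1);
  console.log(`${fs.fs} on ${fs.mount}: ${usedGb}/${sizeGb} GB (${fs.use}%)`);
}
```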
Data sample:
From this you can group them by the array they belong to - see the sketch below.
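Perhaps along these lines (just a sketch - the lsblk-style `raid0`/`raid1` type values are an assumption, since the exact grouping field is cut off above):

```ts
import * as si from 'systeminformation';

// Split block devices into assembled RAID volumes and plain disks,
// so the UI could show one entry per array instead of per member.
const devices = await si.blockDevices();

const raids = devices.filter((d) => d.type.startsWith('raid'));
const disks = devices.filter((d) => d.type === 'disk');

console.log('RAID volumes:', raids.map((d) => `${d.name} (${d.type})`));
console.log('Plain disks:', disks.map((d) => d.name));
```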
Thank you for providing me with all this information! I think I will leave it how it is right now as the default configuration and add a configuration option for "RAID Mode", just like you proposed. The only problem would be testing it, as I don't have a RAID running anywhere, so I hope I can get back to you, @vangyyy and @velmirslac, with any testing requests in the following days.
I even wanted to attempt it myself, but I am kind of short on time. I would certainly help you with testing - just tag me when you need help.
Ok, aside from not being able to test it and therefore not being sure what I am doing - I am not sure how I would display it on the frontend in the end. Let's say you have one disk; right now it will look like this:
If you have multiple drives, it would look like this:
I guess if there was a RAID, I could just add that to the type field. As I don't really know how RAIDs work and what is important when working with them, I don't know what I should display.
Also, I am not entirely sure, but part of this issue is essentially blocked by #59 - when I can't get the correct load for all drives, it will be hard to display the RAID information as well.
I will close this due to inactivity, but if someone wants to help me resume work by providing more info, I will reopen.
@mjefferys Can you please post the output of the following command here:
@velmirslac @vangyyy are you running the same type of RAID? I have never done any RAID config, so please help me understand what the different types are and how they could be shown.
@MauriceNino thanks for looking into it, here's my output:
No idea why GitHub is messing the formatting up on this. Grr
Because you need triple backticks for code blocks :) Thanks for the output, I will look into it in the evening.
Aha, I clicked the insert code block thing in the GitHub editor but it didn't take. Appreciate you looking.
No, I am not running the same type of RAID. @mjefferys appears to be running Linux software RAID using [mdadm](https://raid.wiki.kernel.org/index.php/A_guide_to_mdadm). This type of RAID creates block devices that you can then put a filesystem on and use as normal. The RAID volumes are the `md` devices.

I'm using a different kind of software RAID called [ZFS](https://en.wikipedia.org/wiki/ZFS). ZFS creates virtual devices, called pools, and manages the filesystems, called datasets, in the pool. The ZFS system handles mounting, partitioning, metadata, etc.
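To illustrate how differently the two surface in the library's output, a rough sketch (not dashdot's actual code; the detection heuristics here are assumptions):

```ts
import * as si from 'systeminformation';

// mdadm arrays are ordinary block devices (/dev/md0, /dev/md127, ...)
// with a filesystem on top, so they show up in fsSize() like any disk.
// ZFS datasets appear as mounts of type 'zfs' with no /dev/* path.
const filesystems = await si.fsSize();

const mdVolumes = filesystems.filter((f) => /^\/dev\/md\d+/.test(f.fs));
const zfsDatasets = filesystems.filter((f) => f.type === 'zfs');

console.log({ mdVolumes, zfsDatasets });
```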
Ok, I reviewed the outputs and am unsure how to go about this.
That makes sense, thanks for clearing that up. And about 2): that is definitely a bug in the current system then; I will need to fix this as well. The only problem now is that I can't really map the usage stats to single drives or RAIDs consistently, so while I might get the total size right, the used part will be off. What do you think about that?
Could you use the output contained within the FS size node if you discover a RAID array? I guess you would then end up showing the mounted filesystems rather than individual disks, but potentially that's actually more relevant anyway in a RAID setup?
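A sketch of that fallback (assumed, not the actual implementation - the RAID detection in particular is a guess):

```ts
import * as si from 'systeminformation';

// If any RAID volume is present, fall back to showing the mounted
// filesystems from fsSize(), where usage numbers are meaningful,
// instead of the raw physical disk list.
const devices = await si.blockDevices();
const hasRaid = devices.some((d) => d.type.startsWith('raid'));

const entries = hasRaid
  ? (await si.fsSize()).map((f) => ({ label: f.mount, size: f.size, used: f.used }))
  : (await si.diskLayout()).map((d) => ({ label: d.name, size: d.size, used: null }));

console.log(entries);
```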
That's what I will be doing (or actually am already doing as well). But if I take @velmirslac's output, I only see 2 drives around the 250 GB mark - if I understood it correctly, it should be 1x 250 GB drive and 1x 500 GB drive, and the 500 GB drive should be the main one mounted at
In my setup, one 240 GB SSD is mounted as
Did you mount the other drive as well? The 480 GB must be the ZFS pool then.
Assuming you are both running on an amd64 device, can you please pull the latest dev image (`mauricenino/dashdot:dev`) and check if it works for you now?
I love where this is going. Looks very close!
🎉 This issue has been resolved in version 3.4.0. Please check the changelog for more details.
@MrAlucardDante It is important that you mount your volumes exactly with this format: `-v /:/mnt/host:ro`. So long story short, remove the extra mounts. Maybe I should remove the need for the `/mnt/host` prefix at some point, though.
I tried that too - like I said, I went through the thread to figure it out, but the result is the same. Here is the `df -h` of the updated volume:
And here is a screenshot after recreating the container, emptying the cache and refreshing the page.
Hm, that is really weird. Can you please send me the output of the following command?
Sure. Here it is:
What is weird here is that dashdot still sees my old RAID on sda1 and sdc1 even though it's gone. Maybe it is caching something somewhere. Here is the `fdisk -l` for comparison:
Thanks for the outputs - I don't know why the mount is not listed there. Can you please also run:
About that - I really don't know what I can do there. When the information is still saved somewhere and is read by `systeminformation`, there is not much I can do about it on dashdot's side.
There you go:
I have tagged you in an issue for your fsSize output issue, but for the other problem you are having with your configuration, I would suggest you open up your own issue, because I do not have enough info to provide, and I don't know what the maintainer (@sebhildebrandt) needs exactly. It would probably be a good idea to add the output of the following to the issue you create:
And then explain what your configuration is, and that you suspect that something is not quite right in the output. But I cannot comment on that, because, as I said, I know next to nothing about RAIDs.
I appreciate the help. I really think it's a "caching" issue (or data being hard-written somewhere), because I reran the yarn cli command after removing the volume and I can still see the ZFS and RAID. I will add all the info to the issue you've opened.
My issue has all the necessary info, I think - you need to create a second one for your other problem (with the RAIDs).
The volume mount is only for reading the disk sizes.
Thank you for the clarification, I am not a Linux or Docker expert by any means.
No worries - I ain't either :)
@MrAlucardDante Can you please pull the latest dev image (`mauricenino/dashdot:dev`) and check if your problem is fixed?
@MauriceNino unfortunately, it is still the same.
Here is my compose, just for reference
Are you sure you did upgrade? Please use the main branch (`latest` tag) for now. Does that change something, @MrAlucardDante? If not, please run this command and paste the output:
@MauriceNino you asked me to test with the dev tag, which I did. I might not be a Linux/Docker expert, but I know how to update a docker image 😉 I just tried with the latest tag as well, still the same output:
@MrAlucardDante Yeah I know, but that was a few days ago and I have made some other changes (in the storage area) in the meantime, so I was unsure which version you were on (and it is harder to check that for dev builds) - that's why I was suggesting the latest build. And I didn't want to come off as rude telling you how to update images, I was just trying to make sure. In the self-hosted area there are also a lot of non-tech people around, so I am always trying to be as specific as possible :) I will look into that log output tomorrow and see what I can do.
@MrAlucardDante The latest main release should fix your problems for the most part.
@MauriceNino thanks for your hard work, we are close but not there yet 😉 It should be:
Great! Unfortunately, I can't do anything about your other issue - that's an issue with your configuration leftovers, or an issue on `systeminformation`'s side.
Thank you, I am investigating with @sebhildebrandt. Would it be possible to have a graph for each storage device? With the current setup, I have no way to know whether my SSD is almost full, since it's aggregated with my big array.
Technically there is an option for that, but I don't think it will work for your setup as of right now. Normally, every disk in `blockDevices` comes with a mount point that the usage stats can be matched on.
Indeed, ZFS doesn't set a mount point for the `blockDevices`.
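Which explains why a per-device graph falls apart here: matching block devices to filesystem stats by mount point comes up empty for pool members. A sketch of that matching (again an assumption, not dashdot's actual code):

```ts
import * as si from 'systeminformation';

// Attribute usage to each block device via its mount point. ZFS pool
// members carry no mount, so they land in the 'unmatched' bucket and
// their usage can only be reported at the pool level via fsSize().
const [devices, filesystems] = await Promise.all([
  si.blockDevices(),
  si.fsSize(),
]);

for (const dev of devices) {
  const fs = filesystems.find((f) => f.mount === dev.mount);
  console.log(dev.name, fs ? `${fs.use}% used` : 'unmatched (no mount point)');
}
```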
After investigating with @sebhildebrandt, the issue turned out to be on my end: the labels and types weren't fully erased and re-set when I switched from linux-raid to ZFS. Thank you for your time and help.
# [3.4.0](MauriceNino/dashdot@v3.3.3...v3.4.0) (2022-06-15)

### Bug Fixes

* **api:** error on multiple default network interfaces ([3cf8774](MauriceNino/dashdot@3cf8774)), closes [#118](MauriceNino/dashdot#118)

### Features

* **api, view:** add raid information to storage widget ([ba84d34](MauriceNino/dashdot@ba84d34)), closes [#40](MauriceNino/dashdot#40)
* **api:** add option to select the used network interface ([8b6a78d](MauriceNino/dashdot@8b6a78d)), closes [#117](MauriceNino/dashdot#117)
* **view:** add option to show in imperial units ([d4b1f69](MauriceNino/dashdot@d4b1f69))
**Is your feature request related to a problem? Please describe.**
The reported disk usage is simply a sum of all disks on the system, not the real usable storage.

**Describe the solution you'd like**
Storage should monitor a configurable list of volumes, not just block devices.

**Additional context**
For example, I have a test server that has two 240 GB SSDs and two 480 GB HDDs. Dashdot reports this as 1.4 TB of storage with some tiny sliver as "used". However, those two HDDs and one of the SSDs are in a ZFS pool together, so the actual state of storage on the server is one 240 GB volume with 5% used and one 480 GB volume with 33% used.
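A rough sketch of what that could look like (the `DASHDOT_STORAGE_MOUNTS` variable name is invented here for illustration, not an existing option):

```ts
import * as si from 'systeminformation';

// Hypothetical: report usage for an explicit, user-configured list of
// mount points instead of summing every raw block device on the system.
// DASHDOT_STORAGE_MOUNTS is a made-up name for this sketch.
const watched = (process.env.DASHDOT_STORAGE_MOUNTS ?? '/').split(',');
const filesystems = await si.fsSize();

for (const fs of filesystems.filter((f) => watched.includes(f.mount))) {
  const pct = ((fs.used / fs.size) * 100).toFixed(0);
  console.log(`${fs.mount}: ${pct}% used of ${(fs.size / 1e9).toFixed(0)} GB`);
}
```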