Storage size issue

Discussions related to the 1.36.x series of ZoneMinder
danmitch1
Posts: 11
Joined: Sat May 25, 2024 8:54 pm

Storage size issue

Post by danmitch1 »

Hi guys,

I've added a new storage disk to my ZoneMinder server (Ubuntu 22.04 Jammy):

root@ubuntu-tiny-3:~# df -h /mnt/nvme-storage
Filesystem Size Used Avail Use% Mounted on
/dev/nvme0n1p1 234G 28K 222G 1% /mnt/nvme-storage

But when I add it to ZoneMinder, it only shows a fraction of the free space:

2 NewStorage /dev/nvme0n1p1 local Medium 0.00B of 7.69GB

Any hints or tips will be greatly appreciated!

Regards,

Dan
danmitch1
Posts: 11
Joined: Sat May 25, 2024 8:54 pm

Re: Storage size issue

Post by danmitch1 »

Bah! Ignore this. I created a directory in the new mount and changed the path, and now it's working. Newbie at Linux...
danmitch1
Posts: 11
Joined: Sat May 25, 2024 8:54 pm

Re: Storage size issue

Post by danmitch1 »

I'm not sure, though, why it's showing usage.

root@ubuntu-tiny-3:/mnt/nvme-storage/zm-events# df -h /mnt/nvme-storage/zm-events/
Filesystem Size Used Avail Use% Mounted on
/dev/nvme0n1p1 234G 32K 222G 1% /mnt/nvme-storage


2 NewStorage /mnt/nvme-storage/zm-events local Medium 11.94GB of 233.67GB 0 using null
mikb
Posts: 661
Joined: Mon Mar 25, 2013 12:34 pm

Re: Storage size issue

Post by mikb »

danmitch1 wrote: Sun Jun 09, 2024 9:01 pm I am not sure though, why its showing usage.

root@ubuntu-tiny-3:/mnt/nvme-storage/zm-events# df -h /mnt/nvme-storage/zm-events/
Filesystem Size Used Avail Use% Mounted on
/dev/nvme0n1p1 234G 32K 222G 1% /mnt/nvme-storage
Used: 32K ? That will be because you created a directory or two. It's nothing in the overall scheme of things.

Also, there is always an amount of space reserved for "the super user", which eats up a percentage of the truly available space. You can reduce this amount to zero if you really feel the need to squeeze out every byte, but that is usually done when first creating the filesystem. You may be able to "tune" the filesystem afterwards to change that value and reserve 0 blocks for the superuser.

This is done to prevent a normal user filling up the file system to actual 100%, leaving no space for root to be able to e.g. create a script to fix things, to see log files etc. It shouldn't be an issue on a CCTV-only volume.
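Incidentally, the reservation is visible in the df output earlier in this thread: Size is 234G and Used is only 28K, yet Avail shows just 222G. A quick check of that gap (numbers copied from the df above; df rounds to whole gigabytes, so this is approximate):

```shell
size_g=234    # df "Size" column
avail_g=222   # df "Avail" column (Used is effectively zero)

# The difference is the space df hides from non-root users
echo "$((size_g - avail_g))G hidden"   # 12G -- roughly 5% of the disk, i.e. the root reservation
```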
danmitch1
Posts: 11
Joined: Sat May 25, 2024 8:54 pm

Re: Storage size issue

Post by danmitch1 »

Thanks for your reply.

What I'm wondering is why ZoneMinder sees 11.94GB used while Ubuntu shows 32KB?
danmitch1
Posts: 11
Joined: Sat May 25, 2024 8:54 pm

Re: Storage size issue

Post by danmitch1 »

Reserved block count: 3125725

I guess this explains it?
mikb
Posts: 661
Joined: Mon Mar 25, 2013 12:34 pm

Re: Storage size issue

Post by mikb »

danmitch1 wrote: Fri Jun 14, 2024 10:39 pm Reserved block count: 3125725

I guess this explains it?
At 4096 bytes (4k) per block, that's 12.8E9 bytes, so close enough ...

Also, 5% of your 237G is ... 11.6G -- and coincidentally, 5% is the default reserved percentage.
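Checking that arithmetic (the block count and the 4k block size are the figures quoted in this thread; the GiB conversion is my guess at the base-2 unit ZoneMinder is actually displaying):

```shell
reserved_blocks=3125725   # "Reserved block count" from tune2fs -l, as posted above
block_size=4096           # 4k blocks

bytes=$((reserved_blocks * block_size))
echo "$bytes"                                  # 12802969600, i.e. ~12.8E9 bytes

# In base-2 gigabytes that is ~11.92 GiB -- right next to the 11.94GB ZoneMinder shows
awk -v b="$bytes" 'BEGIN { printf "%.2f GiB\n", b / 1024^3 }'
```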

You definitely could recover most of that by setting reserved blocks to zero.

The sense that "this is a lot of reserved space for nothing, right??" is echoed here

https://askubuntu.com/questions/19504/r ... n-os-disks

Here is someone from 2020 with a 236G file system and 11G gone :(

https://sleeplessbeastie.eu/2020/05/18/ ... em-blocks/

And further down that link they show an example of setting reserved to ZERO with


sudo tune2fs -m 0 /dev/sda3
Obviously -- use the /dev/sdXY for your drive and partition number for the filesystem. If you're using ext2/ext3/ext4 the above command should work for you and give you back that space.
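If you'd rather rehearse the command before touching a real disk, tune2fs also works on a plain ext4 image file, no root required (a sketch; assumes e2fsprogs is installed, and demo.img is just a scratch filename):

```shell
# Create a small scratch file and format it as ext4 (-F because it's not a block device)
truncate -s 64M demo.img
mkfs.ext4 -q -F demo.img

# Freshly made filesystems reserve 5% of blocks for root by default
tune2fs -l demo.img | grep 'Reserved block count'

# Drop the reservation to zero, as the linked article does for /dev/sda3
tune2fs -m 0 demo.img
tune2fs -l demo.img | grep 'Reserved block count'   # now reports 0
```

On the real volume, the same `tune2fs -m 0` runs against the device node instead (here that would be /dev/nvme0n1p1), and adjusting the reserved percentage can be done while the filesystem is mounted.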

I've got into the habit of setting reserved to zero when first mkfs.ext3-ing my data partitions (only) because of this sort of thing!
dougmccrary
Posts: 1320
Joined: Sat Aug 31, 2019 7:35 am
Location: San Diego

Re: Storage size issue

Post by dougmccrary »

Upvote... :)

except ext3?
danmitch1
Posts: 11
Joined: Sat May 25, 2024 8:54 pm

Re: Storage size issue

Post by danmitch1 »

Thank you for that explanation, makes a lot of sense now!

Gonna have to give this a try.

Much appreciated!
mikb
Posts: 661
Joined: Mon Mar 25, 2013 12:34 pm

Re: Storage size issue

Post by mikb »

dougmccrary wrote: Sat Jun 15, 2024 9:46 pm Upvote... :)

except ext3?
From man: "tune2fs - adjust tunable file system parameters on ext2/ext3/ext4 file systems"

So yes, including ext3. Just because there's a 2 in the name, doesn't mean it only does ext2 :D
dougmccrary
Posts: 1320
Joined: Sat Aug 31, 2019 7:35 am
Location: San Diego

Re: Storage size issue

Post by dougmccrary »

when first mkfs.ext3-ing my data partitions
OK, I forgot your experience probably predates ext4 :lol:
mikb
Posts: 661
Joined: Mon Mar 25, 2013 12:34 pm

Re: Storage size issue

Post by mikb »

dougmccrary wrote: Mon Jun 17, 2024 8:11 am
when first mkfs.ext3-ing my data partitions
OK, I forgot your experience probably predates ext4 :lol:
It really does ... I was using the default ext2 when I started (Slackware on 50 floppy disks, 1993 era), and when ext3 turned up it was basically "ext2 with journaling bolted on", so I had no worries switching to it.

Ext4 was only created in 2008, and I can't remember exactly why, but in 2010, when I built my NAS/RAID server, I did not go for ext4. Maybe there were still some concerns to work out before it became mainstream reliable. Dunno. But my notes do show that all the data filesystems were created ext3!
dougmccrary
Posts: 1320
Joined: Sat Aug 31, 2019 7:35 am
Location: San Diego

Re: Storage size issue

Post by dougmccrary »

I did not go for ext4.
One doesn't want to jump on the new stuff until it's been thoroughly field tested, does one? :)
mikb
Posts: 661
Joined: Mon Mar 25, 2013 12:34 pm

Re: Storage size issue

Post by mikb »

dougmccrary wrote: Mon Jun 17, 2024 8:25 pm
I did not go for ext4.
One doesn't want to jump on the new stuff until it's been thoroughly field tested, does one? :)
I realise you are gently poking fun (I accept that!), but the serious answer is that at that time (2010) ext4 *wasn't* thoroughly field tested. I'm more than happy to let other people do the whole "bleeding edge testing" and find out what's wrong with it.

The reason I built the NAS/RAID thing in the first place was a total (justified) paranoid distrust in garbage "NAS" devices and flaky hard drives. I was trying to stamp out all possible sources of problems by using stuff that works, that I understand, and that isn't going to eat my data when I'm not looking. Adding enough md5summing, paranoia check-and-verify, idiot-user-proofing based on dumb stuff I've done/seen done/heard about.

It's 14 years later. It is still working as designed ...

Rock and roll stuff like "hey, let's upgrade the ext3 filesystem to ext4, just because!" is not on my to-do list in that 14 years.
dougmccrary
Posts: 1320
Joined: Sat Aug 31, 2019 7:35 am
Location: San Diego

Re: Storage size issue

Post by dougmccrary »

I understand. I'm not sure of the timeframe, but I do recall going through HDDs like they were going out of style. And in a sense, maybe they were...