Total Size On Disk breaks properties indexing?
So I was trying to index Total Size On Disk on a seemingly broken SSD, and since a few others have had no issues, I thought the problem was with that SSD.
But now I'm doing the same with NTFS indexing on a different device, an HDD, and it seems to be entirely stuck.
It has managed to do 13 million folders, but it is stuck trying to do the remaining 60,000.
At least this device is still doing new or modified folders, while the SSD one was just broken and, I think, refused to.
Why is Total Size On Disk the one breaking it and is there a way to still get it?
A differently named WPS version?
A way to pause it for any folder currently struggling with it while allowing it to continue indexing new or modified folders?
A way to find out which files/folders may be causing it to break? Just like on the SSD, I suspect that if I restart the db it will successfully index some of the 60,000 folders. If so, restarting repeatedly would narrow down which folders are the cause, but that is a lot of manual narrowing: it can break quickly after launching, and it does not restart quickly given the roughly 53 million files indexed.
I could keep the current db running, since it still handles most folders; the problem is that it scans everything constantly.
The device is clearly not happy about it, and it lags. If I do tasks I've done before, it might freeze, and opening This PC makes the drive look like it's offline because EBV is using basically all of its resources; at that point I have to wait.
It mostly doesn't reach that frozen state, but it is a sign the drive is being heavily used.
I mention NTFS because I'm not sure it's necessarily the properties themselves: a db using the same properties with folder monitoring, covering some of the folders this one may be failing on, has worked just fine.
Maybe something breaks before it reaches those folders, and if I restart it enough times it will eventually get to them, like it did on the folder-monitoring version.
Or it's that the folder-monitoring db had been indexing those properties from the start, while with the NTFS version I am scanning the whole device, which is old and already has 50+ million files on it to index.
I doubt a whole new index will fix this issue; it would just read the device a lot more and still end up on the same problem folders. On the SSD, which was small enough to try it, I did build a whole new index by removing the old db, and it eventually hit the same problem.
Maybe debug logging could help shed light on what exactly is causing the issue, but at least on the SSD it got stuck on things it later managed to index after restarts, so I'm not sure that knowing what it gets stuck on tells you what it would get stuck on permanently. (And judging by how few files and folders the restarts eventually indexed on the SSD db below, reaching the more permanently stuck files/folders would probably require thousands of restarts.)
Here are the states I noted after restarts of the SSD db before I gave up and excluded Total Size on Disk entirely, because not even excluding the last few folders fixed the issue; not even excluding all folders fixed it. Only removing the property completely did.
Both totals are the counts that still had no Total Size on Disk; each new folder was a restart after which it managed to index more, or not.
"C:\64-Portable\Everything v1.5.0.1383a x64\A2 MX500 SSD1\78 files, 193 folders"
"C:\64-Portable\Everything v1.5.0.1383a x64\A2 MX500 SSD1\75 files, 191 folders"
"C:\64-Portable\Everything v1.5.0.1383a x64\A2 MX500 SSD1\56 files, 179 folders"
"C:\64-Portable\Everything v1.5.0.1383a x64\A2 MX500 SSD1\48 files, 171 folders"
"C:\64-Portable\Everything v1.5.0.1383a x64\A2 MX500 SSD1\37 files, 166 folders"
"C:\64-Portable\Everything v1.5.0.1383a x64\A2 MX500 SSD1\32 files, 166 folders again"
"C:\64-Portable\Everything v1.5.0.1383a x64\A2 MX500 SSD1\27 files, 162 folders"
"C:\64-Portable\Everything v1.5.0.1383a x64\A2 MX500 SSD1\24 files, 162 folders again"
"C:\64-Portable\Everything v1.5.0.1383a x64\A2 MX500 SSD1\24 files again, 162 folders again"
Maybe it's my system or my storage devices. Or maybe it can be replicated on any device that already has a million or more files and folders on it?
I don't know whether you could test this on a storage device yourself and get the same result; maybe the device has to be broken in some way.
I think the SSD is broken in some way, or it's the SATA port, which rarely, every few months, suddenly makes the device act like it's offline.
I now run the SSD externally, so restarting it is as easy as switching the housing off and on, unlike SATA, where I have to restart the whole machine to get that one device working again. Time will tell whether it can stay on the entire time without eventually becoming inaccessible.
Either way, the HDD definitely has to be broken in some way; it's 8 years old and has been read, I believe, hundreds of millions of times.
I only still use it because I've had worse experiences with newer, supposedly sturdier devices that fail after some months, server-type HDDs I guess (EXOS).
Some HDDs are built different, I suppose.
Re: Total Size On Disk breaks properties indexing?
damnnn a 1 page essay lol
Re: Total Size On Disk breaks properties indexing?
It would've been longer; I forgot to mention that it's not necessarily the folders causing issues but the files.
I have started counting how many folders and total items there are, so for each restart I know how many it has done.
It looks like it only indexes maybe 50 items total per restart, and there are at least 100,000 items to index: the 60,000 folders plus the uncounted files.
So my assumption that it will index a few files and folders, or only files on some restarts, is correct, and the maybe 140,000 items will take thousands of restarts (roughly 140,000 ÷ 50 ≈ 2,800).
I don't think my OS SSD, a shitty one because it was cheap, likes all the GBs of re-writes that every restart of this HDD db causes.
Re: Total Size On Disk breaks properties indexing?
I do not recommend indexing Total Size on Disk.
Gathering the total size on disk for folders is extremely expensive.
Everything doesn't use any information from your index as this is a 'from disk' property.
For each folder, Everything will need to find all descendants, gather the size on disk for each descendant and compute the total size.
Gathering the size on disk for a single item is also slow.
Everything must open a handle to the file, find all the data cluster runs and add them all together to compute the total size.
Any change to a folder will cause Everything to re-gather the total size on disk.
This means re-gathering the total size for all the descendants.
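To make the cost concrete, here is a rough sketch of that per-folder work; it is not Everything's actual implementation. It uses the Win32 GetCompressedFileSizeW call as a stand-in for enumerating the cluster runs directly, and the helper names are illustrative, but the cost profile is the same: one filesystem round-trip per descendant, repeated whenever anything under the folder changes.

```python
# Sketch of why "Total Size On Disk" for folders is expensive: one
# filesystem call per descendant, redone for every folder that changes.
# GetCompressedFileSizeW stands in for reading the cluster runs directly.
import os
import ctypes
from ctypes import wintypes

_kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
_kernel32.GetCompressedFileSizeW.argtypes = [wintypes.LPCWSTR,
                                             ctypes.POINTER(wintypes.DWORD)]
_kernel32.GetCompressedFileSizeW.restype = wintypes.DWORD

INVALID_FILE_SIZE = 0xFFFFFFFF

def size_on_disk(path: str) -> int:
    """Allocated size of a single file (one filesystem round-trip)."""
    high = wintypes.DWORD(0)
    low = _kernel32.GetCompressedFileSizeW(path, ctypes.byref(high))
    if low == INVALID_FILE_SIZE and ctypes.get_last_error() != 0:
        return 0  # inaccessible file; a real indexer would log this
    return (high.value << 32) + low

def total_size_on_disk(folder: str) -> int:
    """Walk every descendant and sum the per-file allocated sizes.
    Any change under `folder` invalidates the total, forcing a re-walk."""
    total = 0
    for root, _dirs, files in os.walk(folder):
        for name in files:
            total += size_on_disk(os.path.join(root, name))
    return total
```

On a volume with tens of millions of files, that per-descendant round-trip, multiplied by every folder change, is what makes the property so heavy.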
Re: Total Size On Disk breaks properties indexing?
What about multi-threading, would that help speed it up in any way? Or speed up cached, unchanged databases/structures? As a last resort, what about a faster CPU/RAM with higher single-thread performance?
Re: Total Size On Disk breaks properties indexing?
It's going to come down to disk speeds.
Re: Total Size On Disk breaks properties indexing?
Is the only thing Total Size On Disk gives you the size adjusted for cluster size?
If so, is there any reason you're wanting that, particularly?
I could see that if you had 1 million 36-byte text files and were using a 4K cluster size, it might point out an inefficient cluster size for some particular purpose, but other than that ...?
Re: Total Size On Disk breaks properties indexing?
therube wrote: ↑Mon Oct 07, 2024 8:11 pm Is the only thing Total Size On Disk gives you the size adjusted for cluster size?
Depends on the size_on_disk_type advanced setting.
The default on Windows 10+ is: show 0 for files stored in the MFT, or round up to the nearest cluster size.
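If I read that description correctly, the default mode's arithmetic is roughly the sketch below. The mft_resident_limit cutoff is an assumption for illustration; whether a file is actually resident in the MFT depends on the record layout, not a fixed byte count.

```python
def shown_size_on_disk(logical_size: int, cluster_size: int = 4096,
                       mft_resident_limit: int = 900) -> int:
    """Default Windows 10+ behaviour as described above: 0 for files small
    enough to live inside the MFT record, otherwise round up to a whole
    number of clusters. The ~900-byte resident limit is an assumption;
    the real cutoff varies with the MFT record layout."""
    if logical_size <= mft_resident_limit:
        return 0
    clusters = -(-logical_size // cluster_size)  # ceiling division
    return clusters * cluster_size

# e.g. a 36-byte file shows 0; a 5,000-byte file on 4K clusters shows 8192
```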
Re: Total Size On Disk breaks properties indexing?
therube wrote: ↑Mon Oct 07, 2024 8:11 pm Is the only thing Total Size On Disk gives you the size adjusted for cluster size?
If so, is there any reason you're wanting that, particularly?
I could see that if you had 1 million 36-byte text files and were using a 4K cluster size, it might point out an inefficient cluster size for some particular purpose, but other than that ...?
I have no interest in cluster size and such.
It was to find folders where "Size" and "Total Size on Disk" differ. I still use it on any newer device, but most of those will only have copies, and all files there get filled on copy.
I could just keep Size on Disk; both will do the job, it seems.
Re: Total Size On Disk breaks properties indexing?
void wrote: ↑Mon Oct 07, 2024 3:47 am I do not recommend indexing Total Size on Disk.
Gathering the total size on disk for folders is extremely expensive.
Everything doesn't use any information from your index as this is a 'from disk' property.
For each folder, Everything will need to find all descendants, gather the size on disk for each descendant and compute the total size.
Gathering the size on disk for a single item is also slow.
Everything must open a handle to the file, find all the data cluster runs and add them all together to compute the total size.
Any change to a folder will cause Everything to re-gather the total size on disk.
This means re-gathering the total size for all the descendants.
It hasn't been slow for most devices, and I haven't really needed it. It's just nice to also be able to find the folders that have unallocated data, instead of just the files within, especially if a folder has many smaller files.
Is there a way to freeze a property that has already been indexed, just to keep the indexing that has already happened? So far I only miss maybe 3,000 folders and maybe 40,000 files, while near the start of counting I'm guessing it was about 52,000 and 120,000 respectively.
Not freezing ALL properties, just the one. I am aware this would make that property outdated as soon as you pause it, but it can give a rough estimate. Maybe this is a database-side change; if so, it could be something to add in the next database version, if you plan one.
Also, is there info on all the properties? Allocation Size and Size on Disk seem to do the same thing for files; do I only need one of them if I want it at least for files, or does one offer something over the other? Should I skip EBV's and use WPS' Size on Disk for that either way? Or is there a cheaper way to index the size on disk? It is mostly for finding unallocated data.
Re: Total Size On Disk breaks properties indexing?
Is there a way to freeze a property that has already been indexed, just to keep the indexing that has already happened?
You could use a separate instance, & open that in -read-only mode.
So do something like:
write existing .db to disk (so Quit Everything, or by other means)
copy that .db to your new instance & rename it to the instance name
so if original is Everything.db & new instance is called NDX, then rename Everything.db to Everything-NDX.db
open that new instance, NDX, in -read-only mode
That will give you access to the indexed data, as it was, without attempting any further updates to it.
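Scripted, those steps might look roughly like the sketch below; the install directory, the db file name, and the -instance switch are assumptions based on a default portable setup, so adjust them to your own paths.

```python
# Sketch of therube's steps: snapshot the db into a read-only instance.
# Paths and the -instance switch are assumptions for a portable setup.
import shutil
import subprocess

EVERYTHING_DIR = r"C:\64-Portable\Everything v1.5.0.1383a x64"  # assumed

# 1. With Everything quit (so Everything.db has been written out and is
#    not in use), copy the db under the instance-qualified name.
shutil.copyfile(EVERYTHING_DIR + r"\Everything.db",
                EVERYTHING_DIR + r"\Everything-NDX.db")

# 2. Open the NDX instance read-only; it serves the frozen index
#    without attempting any further updates to it.
subprocess.Popen([EVERYTHING_DIR + r"\Everything.exe",
                  "-instance", "NDX", "-read-only"])
```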
Re: Total Size On Disk breaks properties indexing?
therube wrote: ↑Thu Oct 10, 2024 6:12 pm
Is there a way to freeze a property that has already been indexed, just to keep the indexing that has already happened?
You could use a separate instance, & open that in -read-only mode.
So do something like:
write existing .db to disk (so Quit Everything, or by other means)
copy that .db to your new instance & rename it to the instance name
so if original is Everything.db & new instance is called NDX, then rename Everything.db to Everything-NDX.db
open that new instance, NDX, in -read-only mode
That will give you access to the indexed data, as it was, without attempting any further updates to it.
Yeah, I always make a copy when I think the index journal will be modified, or in this case the data in the main db, but I ended up just removing the total-size properties on all existing instances instead of waiting. I didn't want B2 to keep being read from; it did not like it at all.
It would've been pointless to keep the data anyway; it would've just saved space over keeping it in the live db. When I want the data that did exist I'll just extract that db, but I already know where most of the unallocated data is, so knowing the total size of folders was a convenience, not a necessity.