unresponsive everything

If you are experiencing problems with "Everything", post here for assistance.
FrozenGamer
Posts: 18
Joined: Sat Feb 06, 2016 5:37 pm

unresponsive everything

Post by FrozenGamer »

I have been having problems with Everything becoming unresponsive on a Windows 10 x64 computer with 32 GB of RAM, which is also networked to 2 servers I have. The file count is pretty large, just under 30 million objects, and I assume that may be part of the problem. I also index file sizes. Is there a log or something to figure out what causes it to become unresponsive? Sometimes it comes back, sometimes it doesn't.
I will post another problem I am having, with not being able to maximize a window, as a separate question.
NotNull
Posts: 5461
Joined: Wed May 24, 2017 9:22 pm

Re: unresponsive everything

Post by NotNull »

Are you running 32-bit or 64-bit Everything on your (x64) Windows?
With 32-bit Everything and the number of objects you have to index, you might run into memory issues. 32-bit programs can't address more than 2 to 3 GB of RAM.
void
Developer
Posts: 16748
Joined: Fri Oct 16, 2009 11:31 pm

Re: unresponsive everything

Post by void »

FrozenGamer
Posts: 18
Joined: Sat Feb 06, 2016 5:37 pm

Re: unresponsive everything

Post by FrozenGamer »

NotNull wrote:Are you running 32-bit or 64-bit Everything on your (x64) Windows?
With 32-bit Everything and the number of objects you have to index, you might run into memory issues. 32-bit programs can't address more than 2 to 3 GB of RAM.
64-bit service and program. I removed one server today, which removes about 8 million files, to see if that helps. Would the buffer size make a difference? It is in the Index > Folders settings, just under "Attempt to monitor changes" (which is checked).
Janus
Posts: 84
Joined: Mon Nov 07, 2016 7:33 pm

Re: unresponsive everything

Post by Janus »

@FrozenGamer

At the moment I am running ~8M files, using ~650 MB of RAM.
I have noticed that when I move directory trees around or extract really large projects, Everything gets sluggish.
Using Task Manager and the diagnostic window, I traced the problem.
In Task Manager you find one core saturated at 95 to 100%, which can also make your mouse cursor blink oddly.

When you alter the file structure, Everything automatically reindexes the DB.
This can take from seconds to over a minute for me, but I also have most of the indexes enabled and fast sort turned on as well.

The real issue is that each type of index, though independent, is run through a single core.
I am not sure why, but I have seen this in many programs built with VS; it seems to be a limitation.

If a way were found to run each index on its own core, then DB updates could be up to core-count times faster.
Though this may also make the whole system sluggish instead of just one program.
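
Purely as an illustration of that idea (not how Everything is actually implemented internally), here is a minimal C sketch of one worker thread per index type; the index names and update routines are made up:

Code: Select all

#include <windows.h>
#include <process.h>
#include <stdio.h>

/* Hypothetical stand-in for one independent index (name, size, date, ...). */
typedef struct {
    const char *name;
    void (*update)(void);   /* made-up per-index update routine */
} index_job;

static void update_name_index(void) { /* rebuild the name index */ }
static void update_size_index(void) { /* rebuild the size index */ }
static void update_date_index(void) { /* rebuild the date index */ }

static unsigned __stdcall worker(void *arg)
{
    index_job *job = (index_job *)arg;
    printf("updating %s index\n", job->name);
    job->update();
    return 0;
}

int main(void)
{
    index_job jobs[] = {
        { "name", update_name_index },
        { "size", update_size_index },
        { "date", update_date_index },
    };
    HANDLE threads[3];

    /* One thread per index; the OS scheduler spreads them over the cores. */
    for (int i = 0; i < 3; i++)
        threads[i] = (HANDLE)_beginthreadex(NULL, 0, worker, &jobs[i], 0, NULL);

    WaitForMultipleObjects(3, threads, TRUE, INFINITE);
    for (int i = 0; i < 3; i++)
        CloseHandle(threads[i]);
    return 0;
}

With the updates independent like this, the wall-clock time for a DB update approaches the time of the slowest index rather than the sum of all of them.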


Janus.
FrozenGamer
Posts: 18
Joined: Sat Feb 06, 2016 5:37 pm

Re: unresponsive everything

Post by FrozenGamer »

After removing one of the servers and reducing from approximately 29 million to approximately 20 million files, the unresponsiveness has gone away. I was also moving files between the 2 servers, which may have been contributing to the problem (with Everything trying to keep up with changes on the network share). Current memory usage is 1852 MB with almost no CPU usage. I am still doing the file movement without problems now, though the server that is having files removed is the one I am keeping indexed.
NotNull
Posts: 5461
Joined: Wed May 24, 2017 9:22 pm

Re: unresponsive everything

Post by NotNull »

The moving of files, and how your Everything settings are configured to handle this, might be a factor in the unresponsiveness.

Assuming the client you use to move files from server A to server B is also the client where Everything is running ...

Here is a very nice explanation of what this buffer does in the case of network folders: viewtopic.php?t=6493

So if you have also enabled "Rescan on full buffer" and Everything has a difficult time keeping up with all the changes (man, 30 million files! :o :shock: ), a full rescan of that networked folder can happen. And Everything will certainly be unresponsive during that time ...
To test this, you could add your other server again, but this time disable "Rescan on full buffer" for both servers.
Does that solve/mitigate the unresponsiveness?
(I know, Everything will miss files in that case, but it is a useful test to narrow things down.)
FrozenGamer
Posts: 18
Joined: Sat Feb 06, 2016 5:37 pm

Re: unresponsive everything

Post by FrozenGamer »

Rescan is disabled, and I think it was disabled before, but if it wasn't, that would explain the unresponsiveness: I would expect 256 changes to pile up on an at-times sluggish server during large file transfers between 2 monitored servers. If it is causing a rescan, that takes a long time; a full rescan of all shared folders takes hours. I will try re-adding the 2nd server just in case I had rescan enabled before. I assume a 3 AM automated re-index would fix the missed changes?
Janus
Posts: 84
Joined: Mon Nov 07, 2016 7:33 pm

Re: unresponsive everything

Post by Janus »

@FrozenGamer

I know this is an odd question, but what samba/cifs versions are you running on your server?

I ask because I had a problem with an update on a server that dropped it back to SambaV1, and that slowed things a lot.

I currently run two servers on my lan, one for work projects, and the other for my private and open source stuff.
Indexing my work server went from 1-2 minutes for ~1M files to 10-13 minutes.
SmbV2 and later have more efficient bulk transfer capabilities, and are better set up for monitoring.

Also, if you are actively monitoring both servers while moving files, you are doubling your indexing load.
One entry for the copy, and one for the file that will be deleted, which is how moves work at the network filesystem level.
The actual entry is renamed in place if it stays on the same partition; otherwise it is a copy, followed by deleting the original when done.
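
As a small Windows-side illustration of that difference (the UNC paths below are made up): MoveFileExW renames in place when source and destination are on the same volume, and with MOVEFILE_COPY_ALLOWED it falls back to copy-then-delete when they are not, which is exactly the pair of changes a monitoring tool ends up seeing.

Code: Select all

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Example paths only: a move between two different shares. */
    const wchar_t *src = L"\\\\server1\\share\\bigfile.bin";
    const wchar_t *dst = L"\\\\server2\\share\\bigfile.bin";

    /* Same volume: a cheap rename, one change to index.
       Different volumes/shares: copy + delete, two changes to index. */
    if (!MoveFileExW(src, dst, MOVEFILE_COPY_ALLOWED)) {
        fprintf(stderr, "move failed, error %lu\n", GetLastError());
        return 1;
    }
    return 0;
}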

I am not sure how your network is set up, but on mine I use a share per project.

\\servername\explorer++
\\servername\celestia
etc.

Granted, most of them contain multiple forks, but it lets me map drives for a particular fork if needed.

The number of monitored shares can affect performance as well.
Same basic idea as having lots of windows or tabs open on your web browser.

Just some thoughts.


Janus.
NotNull
Posts: 5461
Joined: Wed May 24, 2017 9:22 pm

Re: unresponsive everything

Post by NotNull »

FrozenGamer wrote:Rescan is disabled and I think it was disabled before
Disabling rescan was just a test, to find out what's going on. If that is causing the unresponsiveness, we have a handle to start digging deeper.
FrozenGamer wrote:If it is causing a rescan, that takes a long time; a full rescan of all shared folders takes hours. I will try re-adding the 2nd server just in case I had rescan enabled before. I assume a 3 AM automated re-index would fix the missed changes?
Yes, that does! I don't know if it makes a difference, but intuitively I would rescan them one *after* the other (i.e. Server1 = 03:00; Server2 = 05:00).

Note:
If Everything misses a schedule (for example because your client or one of your servers is down), it will scan the next time Everything is started.
There is an Everything.ini entry to prevent re-scanning and thus only scan at schedules:

Code: Select all

folder_update_rescan_asap=0
(You can also type /folder_update_rescan_asap=0 in the search bar and press Enter to have the program put this entry in the INI file.)
NotNull
Posts: 5461
Joined: Wed May 24, 2017 9:22 pm

Re: unresponsive everything

Post by NotNull »

Janus wrote: [...]what samba/cifs versions are you running on your server?

I ask because I had a problem with an update on a server that dropped it back to SambaV1, and that slowed things a lot.

I currently run two servers on my lan, one for work projects, and the other for my private and open source stuff.
Indexing my work server went from 1-2 minutes for ~1M files to 10-13 minutes.
SmbV2 and later have more efficient bulk transfer capabilities, and are better set up for monitoring.
Good point!
I know SMBv1 was slower, but this is ... significant! Thanks for that info.

On the other hand: everyone should have disabled the SMBv1 server by now, after the WannaCry and Petya outbreaks last year (both of which abused SMBv1). It took a while, but even on the average QNAP/Synology NAS this can now be disabled (or maybe it is by default?).
Janus
Posts: 84
Joined: Mon Nov 07, 2016 7:33 pm

Re: unresponsive everything

Post by Janus »

@NotNull

Whether or not I agree with killing off SmbV1, not everyone can.

I still deal with software and equipment that only speaks SmbV1.
The embedded/PLC/homebrew market has lots of things like that in it.
I have had to troubleshoot so many problems because some IT 'person' updated some package without telling anyone.

While I agree that SmbV1 has issues, a lot of 'issues', it still just works.
It is simple to create an open share that anyone can read or write to, which is what is needed most of the time for home users.
That said, just to be fair: for most circumstances, yes, kill it.

However, many pre-Blu-ray network media players only speak SmbV1, for instance.
MP4/MKV/H264/x264 capability is not an indicator; I have seen players from just the last couple of years that also do not.
A lot of standalone NAS boxes also only speak SmbV1; again, not just the old ones.
Low-end hardware focuses on just making it work, which with SmbV1 is simple and direct.

People tend to forget that standalone hardware tends to just sit there being network furniture until something breaks.
You cannot always update the OS/drivers on those.
Either there is some limitation of the CPU/hardware, or there is simply no room in the flash/ROM chip.
I ignore those that use a proprietary HDD format; those are just junk, and I have spent many hours rebuilding them after a failure.

So yes, kill SmbV1, but be prepared to bring it back.
There is hardware/software that asks 'hey, do you speak SmbV1?', then 'how about SmbV2?', and so on.
Want to guess what happens when nothing responds to SmbV1?

The largest reason I am learning C/C++ is to deal with newer software, which nearly always seems to be written in them.
I prefer more direct programming, it is easier for me.
As for the languages themselves.
I hate them.

I deal with hardware, at the register level.
I do not like the way they {C/C++} abstract things; I always want to know what size a variable is. They are as annoying as a GUI that hides things.
While they work well for GUIs, they are horrid at the low level where I work.


Janus.
NotNull
Posts: 5461
Joined: Wed May 24, 2017 9:22 pm

Re: unresponsive everything

Post by NotNull »

@janus:

Interesting read! There are a lot more circumstances where SMBv1 just can't be disabled (be it server or client). Industrial automation, medical appliances ... Utility companies (gas, electricity, etc.) use older hardware and operating systems to control their [can't find an English word]. They have to, because the drivers of (extremely expensive) extension cards don't support newer OSes.
But this is all going way off-topic ... Maybe some other time,
Janus
Posts: 84
Joined: Mon Nov 07, 2016 7:33 pm

Re: unresponsive everything

Post by Janus »

@NotNull

While I agree that this is a little off topic, it also illustrates a common problem in software, especially in the last few years: feature and mission creep.

I fear that if too many capabilities are added to Everything at its core, it could suffer the same fate.
Right now, it does one thing, and it does it wonderfully.

For instance, I wanted to get the size of folders for use in the details view of the Explorer++ file window.
I tried calling Everything directly, and while it was faster than letting Explorer++ add up the file sizes, it bogged my machine down.
100+ DB calls in a row does that, so I needed a better solution.

I went in another direction, that failed, so I tried yet another.
What I ended up doing was adding to the SDK DLL files used to talk to Everything from another program.
https://voidtools.com/forum/viewtopic.php?f=5&t=6687
I may add IPC calls later, but not right now.
I added a cache function with its own calls.
When you check the cache, it checks whether the request has the same parent directory as the last call.
If it does not, it refreshes the cache and then looks up its own entry in it.
The way it works is that a database search is done for the parent directory, and the sizes of all of that parent's subdirectories are recorded.
Since, when you are displaying folder sizes in details view, you will have many calls where only the last directory component differs, I have it record the results for the common parent and return them.
This gives me a 1-second response for 100+ folder sizes.
It also adds an API call for any other file manager that may have a need for it.
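
A minimal sketch of that cache idea (this is not the actual SDK code; query_child_sizes() is a hypothetical stand-in for the real lookup done through the Everything SDK/IPC):

Code: Select all

#include <stdio.h>
#include <string.h>
#include <stdint.h>

#define MAX_CHILDREN 1024
#define MAX_PATH_LEN 1024

typedef struct {
    char     name[MAX_PATH_LEN];   /* subfolder name under the cached parent */
    uint64_t size;                 /* its indexed size in bytes              */
} child_entry;

static char        cached_parent[MAX_PATH_LEN];  /* parent of the last call */
static child_entry cache[MAX_CHILDREN];
static int         cache_count;

/* Hypothetical: fill 'cache' with the sizes of every subfolder of 'parent',
 * using one database query instead of one query per folder. */
static int query_child_sizes(const char *parent)
{
    (void)parent;   /* ... Everything SDK / IPC search would go here ... */
    return 0;       /* number of entries found */
}

/* Return the size of 'folder' (full path), refreshing the cache only when
 * the parent directory changes between calls. */
static int get_folder_size(const char *folder, uint64_t *size_out)
{
    const char *slash = strrchr(folder, '\\');
    if (!slash)
        return 0;

    size_t plen = (size_t)(slash - folder);
    if (plen >= MAX_PATH_LEN)
        return 0;

    char parent[MAX_PATH_LEN];
    memcpy(parent, folder, plen);
    parent[plen] = '\0';

    if (strcmp(parent, cached_parent) != 0) {   /* new parent: one DB query */
        cache_count = query_child_sizes(parent);
        strcpy(cached_parent, parent);
    }

    for (int i = 0; i < cache_count; i++) {     /* otherwise: warm cache hit */
        if (strcmp(cache[i].name, slash + 1) == 0) {
            *size_out = cache[i].size;
            return 1;
        }
    }
    return 0;
}

int main(void)
{
    uint64_t size;
    /* In details view the calls arrive with the same parent and a different
     * leaf each time, so only the first call triggers a database search. */
    if (get_folder_size("C:\\Projects\\explorer++", &size))
        printf("%llu bytes\n", (unsigned long long)size);
    return 0;
}

The point of the layout is that refreshing the cache is the only expensive step, and it happens once per directory rather than once per row in the details view.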


Adding this to the core of Everything, while convenient, is not needed, and adds complexity.
I made it an add-on that can be ignored.
It allocates no memory until it is called.
It only interacts with Everything when updated.

My current code is over in 'SDK question', and I will update it once I am done cleaning it up.
At this time it is structured to be walked through, from when I fought my way through figuring out the whole string thing.
Always start with the smallest string routine name that might work, and stringsafe is a joke.


The more 'features' are added to the Everything core, the easier it is to get into an unresponsive state.
It is the same as in many embedded applications, where the capabilities of SmbV2 or later make no sense.
I say they make no sense because they add no functionality when they are added.

The greatest cause of vulnerabilities in computers is adding functions just to add them.
That is the source of nearly all Windows vulnerabilities.
They could have stopped 99% of them at any time; all they had to do was turn on stack and parameter checking.
However, that takes time, and with today's operating systems even more so.
Turn all that on and you drag the GUI response speed on Vista and later down to a crawl.

A modern computer running XP with all the stack and parameter checking turned on will run just fine.
It can run the desktop on the CPU alone, or through the GPU like Vista and later are hardcoded to; it doesn't care.
It has a base core of functionality that is added to as the hardware is.
Try turning off hardware rendering and turning on stack and parameter checking in Vista and later, and it will just crawl.
No matter how fast your machine is, if you try that, it is a fail.

Instead of fixing the underlying problems, M$ moved stuff out of kernel space, then added some checks on the kernel-space calls.
The kernel-space call checking is actually faster than parameter checking on each individual call, and was quicker to implement.
This isolates the kernel, but leaves userland wide open.
That is why your printer driver, or nearly anything else, can go down without taking the OS with it.
The cost is slower printer-driver and GUI subsystems, which is hidden by how fast modern hardware is.
I would have kept the number of calls down and checked parameters myself, but I prefer smaller attack surfaces.

The same applies to Everything.
If too much is added to the core instead of the GUI, it can get boggy in a hurry.
Given how snappy and responsive it is now, I want it to stay the way it is.

Though multicore sorts, one per index, would be a nice addition.

Just my thoughts.


Janus.