
Re: Portable version degrading

Didier Colle
 

Hi Joseph,


DLL files: what dll files are reloaded all the time (vs those that get loaded once and remain in RAM)?

Executable files: to what executable files are you referring?

DLL and executable files are typically static: once written to disk, they are not modified again and can therefore stay where they were initially written; they will not become fragmented over time.


"And what not" is not very specific. Do you mean any files with dynamic content (the reason I was referring to log and config files: files with dynamic content could potentially become more fragmented as they grow or change)?


Kind regards,


Didier

On 20/01/2018 3:01, Joseph Lee wrote:
Hi,
Config files and log files are not the only thing NVDA will need to access from disks. Others include various DLL files, one or more executables and what not.
Cheers,
Joseph

-----Original Message-----
From: nvda@nvda.groups.io [mailto:nvda@nvda.groups.io] On Behalf Of Didier Colle
Sent: Friday, January 19, 2018 5:47 PM
To: nvda@nvda.groups.io
Subject: Re: [nvda] Portable version degrading

Dear Joseph,


On 20/01/2018 1:21, Joseph Lee wrote:
Hi,
Fragmentation will happen as long as new information is written in places that'll cause problems for fast reading later. Also, while something is running, the operating system will still need to access things on disk if asked by the program.
sure, of course.

Question:
Roger already answered that even with add-ons disabled, the sluggishness and weird behavior remain.
Thus, what does the nvda core (or at least non-add-on-related code) ask the OS to access on disk while running? Screen reading, the main function of nvda, seems to me to have very little to do with disk access (except for writing logs and reading configuration, which is probably loaded into RAM anyway).

Kind regards,

Didier

As for swapping configurations: in theory yes, as long as the versions are compatible enough not to cause visible side effects. For example, if one swaps configurations between the stable and next branches, that could cause problems, in that some things required by next snapshots might not be present.
As for the add-on being the culprit: could be. One thing to try, though: what if Roger runs his portable copy with all add-ons disabled? If that improves performance, it could be an add-on; if not, we should try something else.
Implicating file systems: Roger did say this is an internal drive, hence I put more weight on possible fragmentation and data movement issues.
Cheers,
Joseph

-----Original Message-----
From: nvda@nvda.groups.io [mailto:nvda@nvda.groups.io] On Behalf Of
Didier Colle
Sent: Friday, January 19, 2018 3:40 PM
To: nvda@nvda.groups.io
Subject: Re: [nvda] Portable version degrading

Dear Joseph, roger, all,


@Joseph: I am not sure I understand what point you are trying to make. Are you suggesting there is indeed a filesystem problem as the root cause?


trying to recapitulate a few things:

* "it can make it appear that the add on is defective or has a bug
while it really doesn't."

@Roger: for any further meaningful diagnosis, I believe a more concrete
symptom description is needed. How does such a "would be" bug manifest
itself? Is it always the same "would be" bug, or do many "would be"
bugs appear randomly? When do such "would be" bugs appear (during
loading, or during execution of the add-on)?

* "there's no file system errors"

I guess that means there are no issues with the
physical/electronic/magnetic integrity of the storage medium itself
(or that the filesystem has set faulty areas aside so that they are
not used anymore). If corrupted/broken blocks on the storage medium
were the root cause, something should show up in the logs, as loading
the relevant python modules should throw an exception (and if these
exceptions are not logged, it should be possible to make them so).
Therefore, I dismiss storage medium/filesystem corruption as the root
cause of the above-mentioned "would be" bugs (assuming bugs are to be
interpreted as broken functionality).


* "I also notice a few functions of nvda either don't work at all or
nvda gets very sluggish in responsiveness"
@Roger: again, for any meaningful diagnosis, please provide a more
concrete symptom description. Exactly which functions are you speaking
about? What does "not work at all" mean exactly: sluggishness with
extremely long or infinite response times, or do you get errors, or
something else? Is the sluggishness general, or does it only happen in
those specific functions? What do you mean by sluggishness: a response
within a second? A few seconds? A minute or more? When does the
sluggishness happen: when loading add-ons/modules, continuously, or ...?
* "... nor any fragmenting.". Statement from Joseph: "In case of
Roger's
issue: a possible contributing factor is constant add-on updates. He
uses an add-on that is updated on a regular basis, .. ..., potentially
fragmenting bits of files ..."
The two statements appear to me to be contradictory. Fragmentation may
be a root cause of sluggishness, but only when access to the storage
medium is needed, not during general execution, which typically takes
place from RAM rather than from disk. Therefore, fragmentation issues
appear very unlikely to me.

* "while the installed version is always stable as a rock." and "I use
the portable copy to test a couple add ons"
@Roger: how much do you use one and the other? How much usage does it
take before the portable copy gets degraded?
The two statements suggest there is a problem with the portable copies.
However, there seems to be nobody else experiencing the same problem.
Thus, I would translate this into the following question that you
would need to test/investigate further: is there a conflict between
the portable copies and your specific system setup, or is the issue
caused by the add-ons under test?
To test the former possibility, why not use a fresh portable copy
replicating the setup of your installed version, instead of that
installed version, for a while?
Testing the latter would probably require moving the add-on testing to
the installed version: I guess you are using the portable version for
this purpose precisely to avoid messing up the installed version.
Would it be possible to do the testing in, for example, a virtual
machine, so that you can test on an installed copy instead of a
portable one without messing up your main system?
Joseph, anyone else: is there a (possibly more cumbersome) way to
perform testing on an installed version while keeping, at all times,
the possibility of reverting to a stable/clean situation? (e.g., a
.bat script that swaps the configuration file and add-on directories
between stable and testing versions and that can easily be run between
exiting nvda and restarting it?) If none of the above options is
tried, my suggestion would then be to regularly take snapshot copies
of your portable copy, so that when degradation takes place a diff
between the stable and degraded versions can be taken and
investigated.
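To make the snapshot-and-diff idea concrete, a small script along these lines could automate the comparison. This is only a sketch; the directory paths are hypothetical placeholders, and it should be run while NVDA is not running:

```python
import filecmp
import shutil
from pathlib import Path

# Hypothetical locations; adjust to wherever the portable copy lives.
PORTABLE = Path(r"F:\nvda_portable")
SNAPSHOT = Path(r"F:\nvda_portable_snapshot")

def take_snapshot():
    """Copy the whole portable directory while NVDA is not running."""
    if SNAPSHOT.exists():
        shutil.rmtree(SNAPSHOT)
    shutil.copytree(PORTABLE, SNAPSHOT)

def report_differences(snapshot_dir, current_dir):
    """Recursively list files that were added, removed, or changed
    between the known-good snapshot and the current portable copy."""
    changed = []
    cmp = filecmp.dircmp(str(snapshot_dir), str(current_dir))

    def walk(c, prefix=""):
        for name in c.left_only:
            changed.append(("only in snapshot", prefix + name))
        for name in c.right_only:
            changed.append(("only in current", prefix + name))
        for name in c.diff_files:
            changed.append(("modified", prefix + name))
        for name, sub in c.subdirs.items():
            walk(sub, prefix + name + "/")

    walk(cmp)
    return changed
```

When degradation shows up, `report_differences(SNAPSHOT, PORTABLE)` would point at exactly which files drifted since the last known-good state.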

In summary, I believe:
1) a much more concrete/detailed symptom description is needed before
any meaningful diagnosis is possible;
2) with the info I have, filesystem/storage medium problems/corruption
are very unlikely;
3) further testing/investigation is needed in order to support or
dismiss the various hypotheses.

Kind regards,

Didier

On 19/01/2018 18:19, Joseph Lee wrote:
Hi,
It'll depend on what type of drive it is. If it's a traditional hard
drive, it'll degrade as data moves around, creating the need for defragmentation.
This is especially the case when data is repeatedly written and the
file system is asked to find new locations to hold the constantly changing data.
In the case of solid-state drives, it'll degrade if the same region is
written repeatedly, as flash memory has limited endurance when it
comes to program/erase cycles.
In the case of Roger's issue: a possible contributing factor is
constant add-on updates. He uses an add-on that is updated on a
regular basis, putting strain on the part of the drive where the
add-on bits are stored. Thus, some drive sectors are repeatedly
bombarded with new information, and one thing operating systems will
do in this case is move the new data somewhere else on the drive,
potentially fragmenting bits of files (I'll explain in a moment). Thus
one solution is to not test every add-on update, but that's a bit
risky, as Roger is one of the key testers for the add-on I'm talking about.
Regarding fragmentation and whatnot: the following is a bit geeky, but
I believe knowing how some parts of a file system (and, by extension,
an operating system) work will help folks better understand what might
be going on:
Storage devices encountered in the wild are typically organized into
blocks of fixed-length units called "sectors". A sector is the
smallest unit of information that the storage device can present to
the outside world: it is the granularity at which data is read from or
written to the device.
For example, when you store a small document on a hard disk drive
(HDD) and later wish to open it in Notepad, Windows will ask the
module in charge of organizing and interpreting data on a drive
(called a file system) to locate the sectors where the document (or
the magnets or flash cells that constitute the document's data) is
stored and bring it out to you. All you see is the path to the
document, but the file system will ask the drive controller (a small
computer inside hard disks and other storage devices) to fetch data
from a particular sector or region. Depending on the kind of storage
medium you're dealing with, reading may involve waiting for the
platter region with the desired sector to come to the attention of a
read/write head (a thin magnetic sensor used to detect or make changes
to magnetic fields), or peering inside tiny cells and extracting the
electrons trapped within. Those are vivid descriptions of how hard
disks and solid-state drives, respectively, really work behind the scenes.
But storage devices are not just meant for reading things for your
enjoyment; without a means of storing new things, they would be
useless. Depending on the medium you've got, when you save something
to a storage device, the file system in charge of the device will ask
the drive controller either to find a spot on a disk full of magnets
and change some of them, or to apply a high voltage that drains the
charge from all cells in a block, erase the block, and refill the
empty block with the modified data (including the old bits). You can
imagine how tedious this can get, but as far as your work is
concerned, it is safe and sound.
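The erase-before-write cycle can be sketched as a toy model of a single flash block. This is purely illustrative (real controllers are far more sophisticated, with spare blocks and remapping), but it shows why overwriting one page is so much more expensive than filling a fresh one:

```python
# Toy model of one flash block: pages can only be written when erased,
# so changing an already-written page means read-all, erase, rewrite-all.
PAGES_PER_BLOCK = 4
ERASED = None

class FlashBlock:
    def __init__(self):
        self.pages = [ERASED] * PAGES_PER_BLOCK
        self.erase_count = 0  # wear accumulates with every erase cycle

    def program(self, page_index, data):
        """Write to an erased page directly; otherwise rewrite the block."""
        if self.pages[page_index] is ERASED:
            self.pages[page_index] = data  # cheap path: fresh page
        else:
            # Read-modify-erase-write: the expensive path.
            saved = list(self.pages)
            saved[page_index] = data
            self.pages = [ERASED] * PAGES_PER_BLOCK  # erase whole block
            self.erase_count += 1
            self.pages = saved                       # reprogram everything

block = FlashBlock()
block.program(0, "a")      # fresh page: cheap
block.program(0, "b")      # overwrite: forces a whole-block erase
print(block.erase_count)   # 1
```

Repeatedly updating the same file (e.g., a frequently rewritten add-on) keeps hitting that expensive path, which is exactly the wear concern described above.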
Now imagine you wish to read and write repeatedly on a storage device.
The file system will repeatedly ask the drive hardware to fetch data
from specific regions, and will look for new locations to store
changes. On a hard drive, because there are a limited number of heads
and it takes a while for the desired magnetic region to come to the
attention of one, read speed is slow, hence increased latency (latency
refers to how long you have to wait for something to happen). When it
comes to saving things to HDDs, all the drive needs to do is tell the
read/write head to change some magnets wherever it wishes, so
overwriting data in place is possible and easy. But operating systems
(rather, file systems) are smarter than that, as we'll see below.
In the case of solid-state drives, reading data is as simple as
looking up the address (or sector) where the electrons comprising the
data you want are stored (akin to walking down a street grid), so
there is no need to wait for a mechanical part to reach the right
spot. This is why solid-state drives respond so quickly when reading.
On the other hand, writing (injecting electrons) is comparatively
slow, because the drive may need to erase an entire block before
writing new data. In other words, just changing a letter in a document
and saving it to an SSD involves a lot of work; SSDs are slower at
writing than at reading, though thanks to the underlying technology
they are still far faster than hard disks.
As hinted above, file systems are smarter than drive controllers to
some extent. If data is written to a drive, the drive controller will
simply process whatever comes along. But file systems won't leave it
at that: file systems such as NTFS (New Technology File System) will
schedule data writes so they have minimal impact on the lifespan of
the storage device. For hard disks, the file system tries its best to
tell the drive to store file data in consecutive locations in one big
batch, but that doesn't always work. For SSDs, the file system will
ask the drive to store new information in different cells so that all
regions are used equally (at least for storing new information; this
is called wear leveling). One way to speed things up is to ask the
drive to reorganize data so that file fragments sit in consecutive
sectors, or to trim deleted regions so fresh information can be
written to more blocks (for HDDs and SSDs, respectively); this
operation is itself tedious, and produces bad results if not done
correctly and carefully.
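The fragmentation effect itself can be illustrated with a deliberately simplified toy allocator: once earlier files are deleted, a new larger file gets written into the freed holes, and its pieces end up in non-adjacent sectors:

```python
def allocate(free_sectors, length):
    """Grab the first `length` free sector numbers, contiguous or not."""
    grabbed = sorted(free_sectors)[:length]
    for s in grabbed:
        free_sectors.discard(s)
    return grabbed

def fragments(extents):
    """Count runs of consecutive sector numbers (1 == unfragmented)."""
    runs = 1
    for prev, cur in zip(extents, extents[1:]):
        if cur != prev + 1:
            runs += 1
    return runs

# Disk with 10 sectors; file A takes 0-3, file B takes 4-7.
free = set(range(10))
a = allocate(free, 4)
b = allocate(free, 4)
# Delete file A: its sectors return to the free pool.
free.update(a)
# A new 6-sector file now gets sectors 0-3 plus 8-9: two fragments.
c = allocate(free, 6)
print(c, fragments(c))   # [0, 1, 2, 3, 8, 9] 2
```

On an HDD, reading that last file needs two separate seeks instead of one, which is where the extra latency comes from; defragmentation is essentially the drive being asked to move those pieces back into one consecutive run.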

I do understand the above explanation is a bit geeky, but I believe
you need to know some things about how this works. It is also a
personal exercise to refresh my memory on certain computer science
topics (I majored in it not long ago, and my interests were mostly
hardware and operating systems, hence I was naturally drawn to screen
reader internals and how they interact with system software).
Cheers,
Joseph

-----Original Message-----
From: nvda@nvda.groups.io [mailto:nvda@nvda.groups.io] On Behalf Of
Roger Stewart
Sent: Friday, January 19, 2018 7:58 AM
To: nvda@nvda.groups.io
Subject: Re: [nvda] Portable version degrading

The problem with this discussion is my portable version is on an
internal hard drive. So why is this degrading?

Nothing else on this drive has any trouble and I've checked, and
there's no file system errors nor any fragmenting.


Roger












On 1/19/2018 8:28 AM, Antony Stone wrote:
USB drives do need to be unmounted before removing them, otherwise
there is the risk of file system corruption. Precisely the same is
true for external hard drives, floppy disks, or any other writeable
medium you can temporarily attach to a computer.

I've never seen a USB thumb drive fall apart, and I think they're
considerably more robust than floppy disks, which is basically what
they replaced. You can also drop them on the floor with a good deal
more confidence of them working afterwards than if you drop an
external hard disk.

Yes, they're vulnerable to static electricity; that's why most of them
have plastic caps to put over the contacts or a slider to retract the
contacts into the body.

My experience is that if they're treated reasonably they work very
well. If they're mistreated they'll give as many problems as any other
mistreated storage medium.


Antony.

On Friday 19 January 2018 at 15:17:36, tonea.ctr.morrow@faa.gov wrote:

A few years back, I had a job for three years where people brought me
their files on USB thumb drives. These things are horrible in terms of
longevity. They really do have to be unmounted prior to removal from
the computer or they get corrupted. They physically fall apart easily.
And the hardware inside seems to be more vulnerable to
static-electricity data loss than other portable drives, certainly
more vulnerable than most computers.



I would think that would be the problem.



Tonea



-----Original Message-----

I've noticed over the past couple years that my portable install of
nvda will sometimes degrade or get a bit corrupted over time all by
itself, while the installed version is always stable as a rock.
Does anyone know why this is, and is there any way to prevent this
from happening? I use the portable copy to test a couple add ons,
and if the portable version corrupts, it can make it appear that
the add on is defective or has a bug while it really doesn't.
Deleting the portable copy and making a new one will clear it up.
I also notice a few functions of nvda either don't work at all or
nvda gets very sluggish in responsiveness, and this all gets back
to normal after a complete flush and remake of the portable version.
As I say, this has never happened at all with my installed copy on
the same computer.





Roger










Re: Portable version degrading

Didier Colle
 

Dear Roger,


my apologies for being that technical.


Just another question:

When you have such a degraded portable version, does the sluggish and weird behavior occur immediately after (re)starting it, or does the situation deteriorate after it has been running for a while? In other words, does it help to restart the degraded portable version, without creating a fresh new one?


Why I am asking this:
"Also, some functions like hitting the nvda key to either invoke the menu or quit nvda just won't work at all" looks a bit like what I sometimes encounter.
I have the CapsLock key assigned to act also as the NVDA key. From time to time (not that often; way too infrequently to do any diagnostics), my installed version of NVDA also becomes sluggish, and pressing the CapsLock key toggles the CapsLock state (whereas when it is assigned as the NVDA key, the CapsLock state can only be toggled by pressing CapsLock twice). However, simply restarting my nvda installation (pressing ctrl+alt+n) normalizes the situation. I guess that when I end up in that situation the CapsLock key at least loses all its capability to function as the nvda key, but to be honest, I never really tested this, as the situation mostly occurs at the worst possible moments and pressing ctrl+alt+n is so easy.
At least the symptoms appear to be similar: sluggishness going hand in hand with the nvda key losing its function as nvda key. However, from your initial post I understood you had to create a fresh new portable version, while in my case simply restarting nvda is enough (my questions above are meant to confirm this). The other difference is that you seem not to have the problem with the installed version, while I do have the problem (although not often) with the installed version. So we are probably perceiving different issues...

@Joseph or anyone else with deep knowledge of the nvda core code: do you see a possible relationship between sluggishness and the nvda key losing its capability to function as the nvda key?
Could there be issues with the multithreading implementation in wx? I cannot reliably reproduce my problem, but my feeling/impression (for what it is worth) is that it starts manifesting after the cpu has been heavily loaded for a while, eventually breaking the proper multithreaded behavior.

Kind regards,

Didier

On 20/01/2018 2:02, Roger Stewart wrote:
Boy, most of this is way over my head! However, I did try running it with add ons disabled and the sluggishness still occurred.  By this I mean that when I type a letter, it may take a second or so for it to be voiced while this is usually instantaneous.  Also, some functions like hitting the nvda key to either invoke the menu or quit nvda just won't work at all and I need to shut it down with my Pview utility.  Most of the other stuff I don't understand at all.  I know Joseph runs something called a virtual machine but it is on his Mac I think and I don't have a Mac here so I probably can't do this.  If this is a built in function of Windows, I'm not aware of it nor how to use it.  Also, I always run my portable copy on my F drive which is a mechanical hard drive.  My main drive is an SSD, so I don't want to update anything there any more often than necessary.  I use the portable version to test one particular add on, but this add on is nearly fully matured now and so should need no further testing except for the version for the new Python version to make sure everything is working fine.

Roger









On 1/19/2018 5:40 PM, Didier Colle wrote:
Dear Joseph, roger, all,


@Joseph: not sure to understand what point you try to make. Is your suggestion there is indeed a filesystem problem as the root cause?


trying to recapitulate a few things:

* "it can make it appear that the add on is defective or has a bug while it really doesn't."

@Roger: for any further meaningful diagnosis, I believe a more concrete symptom description is needed? (how does such "would be" bug manifestate itself? Is it always the same "would be" bug or do many "would be" bugs appear randomly? when do such "would be" bugs appear (during loading, during execution of the add-on)?)

* "there's no file system errors"

I guess that means there are no issues with the physical/electronic/magnetic integrity of the storage medium itself (or that the filesystem has set them aside such that they are not used anymore). In case corrupted/broken blocks on the storage medium would be the root cause, something should be found in the logs as loading the relevant python modules should throw an exception (if these exceptions are not logged, it should be possible to do so). Therefore, I dismiss storage medium/filesystem corruption as root cause of the above mentioned "would be" bugs (assuming bugs have to be interpreted as broken functionality).


* "I also notice a few functions of nvda either don't work at all or nvda gets very sluggish in responsiveness"
@Roger: again, for any meaningfull diagnosis, provide a more concrete symptiom description. What functions are you exactly speaking about? What does "not work at all" exactly mean: do you mean sluggishness with extremely long / infinite response times? Or do you get errors? or ... Is the sluggishness general or does it happen in those specific functions? What do you mean by sluggishness: response in only a second? A few seconds? A minute or more? When does sluggishness happen: at time of loading add-on/modules or continuously or ...?
* "... nor any fragmenting.". Statement from Joseph: "In case of Roger's issue: a possible contributing factor is constant add-on updates. He uses an add-on that is updated on a regular basis, .. ..., potentially fragmenting bits of files ..."
The two statements appear to me as contradictory. Fragmentation may be a root cause of sluggishness, but only when access to storage medium is needed and not during general execution which typically takes place from RAM rather then from disc. Therefore, fragmentation issues appear very unlikely to me.

* "while the installed version is always stable as a rock." and "I use the portable copy to test a couple add ons"
@Roger: how much do you use one and the other? How much usage does it take before the portable copy gets degraded?
The two statements suggest there is a problem with the portable copies. However, there seems to be nobody else experiencing the same problem. Thus, I would translate this into the following question that you would need to test/investigage further: is there a conflict between the portable copies and your specific system setup, or is the issue caused by the add-ons under test?
To test the former possibility, why not using a fresh portable copy replicating the setup of your installed version instead of that installed version for a while?
To test the latter that would probably require moving the add-on testing to the installed version: I guess you are using the portable version for this purpose, exactly to avoid messing up the installed version. Would you have the possibility to do the testing in for example a virtual machine, such that you can test on an installed instead of a portable copy version, while not messing up your main system with this testing? Joseph, anyone else: is there a (possibly more cumbersome) way to perform testing on an installed version while keeping at all times a possibility to revert back to a stable/clean situation? (e.g., having a .bat script that swaps configuration file and add-on directories between stable and testing versions and that can easily be executed in between exiting nvda and restarting it?)
In case none of the above options is tried, my suggestions would be then to regularly take snapshot copies of your portable copy such that when degradation takes place a diff between stable and degraded version can be taken and investigated.

In summary, I believe:
1) a much more concrete/detailed/... symptom description is needed before any meaningful statements regarding diagnosis is possible;
2) with the info I have, filesystem/storage medium problems/corruptions are very unlikely.
3) further testing/investigation is needed in order to support/dismiss certain hypotheses.

Kind regards,

Didier

On 19/01/2018 18:19, Joseph Lee wrote:
Hi,
It'll depend on what type of drive it is. If it's a traditional hard drive,
it'll degrade as data moves around, creating the need for defragmentation.
This is especially the case when data is repeatedly written and the file
system is asked to find new locations to hold the constantly changing data.
In case of solid-state drives, it'll degrade if the same region is written
repeatedly, as flash memory has limited endurance when it comes to data
reads and writes.
In case of Roger's issue: a possible contributing factor is constant add-on
updates. He uses an add-on that is updated on a regular basis, putting
strain on part of the drive where the add-on bits are stored. Thus, some
drive sectors are repeatedly bombarded with new information, and one way
operating systems will do in this case is move the new data somewhere else
on the drive, potentially fragmenting bits of files (I'll explain in a
moment). Thus one solution is to not test all add-on updates, but that's a
bit risky as Roger is one of the key testers for this add-on I'm talking
about.
Regarding fragmentation and what not: the following is a bit geeky but I
believe you should know about how some parts of a file system (an in
extension, operating systems) works, because I believe it'll help folks
better understand what might be going on:
Storage devices encountered in the wild are typically organized into many
parts, typically into blocks of fixed-length units called "sectors". A
sector is smallest unit of information that the storage device can present
to the outside world, as in how much data can be held on a storage device.
For example, when you store a small document on a hard disk drive (HDD) and
when you wish to open it in Notepad, Windows will ask a module that's in
charge of organizing and interpreting data on a drive (called a file system)
to locate the sector where the document (or magnets or flash cells that
constitute the document data) is stored and bring it out to you. To you, all
you see is the path to the document, but the file system will ask the drive
controller (a small computer inside hard disks and other storage devices) to
fetch data in a particular sector or region. Depending on what kind of
storage medium you're dealing with, reading from disks may involve waiting
for a platter with desired sector to come to the attention of a read/write
head (a thin magnetic sensor used to detect or make changes to magnetic
fields) or peering inside windows and extracting electrons trapped within.
This last sentence is a vivid description of how hard disks and solid-state
drives really work behind the scenes, respectively.
But storage devices are not just meant for reading things for your
enjoyment. Without means of storing new things, it becomes useless.
Depending on the medium you've got, when you save something to a storage
device, the file system in charge of the device will ask the drive
controller to either find a spot on a disk filled with magnets and change
some magnets, or apply heat pressure to dislodge all cells on a block, erase
the block, add new things, and fill the empty block with modified data
(including old bits). You can imagine how tedious this can get, but as far
as your work is concerned, it is safe and sound.
Now imagine you wish to read and write repeatedly on a storage device. The
file system will repeatedly ask the drive hardware to fetch data from
specific regions, and will look for new locations to store changes. On a
hard drive, because there are limited number of heads and it'll take a while
for desired magnetic region to come to attention of one, read speed is slow,
hence increased latency (latency refers to how long you have to wait for
something to happen). When it comes to saving things to HDD's, all the drive
needs to do is tell the read/write head to change some magnets wherever it
wishes, hence data overriding is possible and easy. But operating systems
(rather, file systems) are smarter than that, as we'll see below.
In case of solid-state drives, reading data is simple as looking up the
address (or sector) where the electrons comprising the data you want is
saved (akin to walking down a street grid), so no need to wait for a sensor
to wait for something to happen. This is the reason why solid-state drives
appear to respond fast when reading something. On the other hand, writing or
injecting electrons is very slow because the drive needs to erase the entire
block before writing new data. In other words, just changing a letter in a
document and saving it to an SSD involves a lot of work, hence SSD's are
slower when it comes to writing new things, but because of the underlying
technology in use, it is way faster than hard disks.
As hinted above, file systems are smarter than drive controllers to some
extent. If data is written to a drive, the drive controller will process
whatever it comes along its path. But file systems won't let drive
controllers get away with that: file systems such as NTFS (New Technology
File System) will schedule data writes so it'll have minimal impact on the
lifespan of a storage device. For hard disks, it'll try its best to tell the
drive to store file data in consecutive locations in one big batch, but that
doesn't always work. For SSD's, the file system will ask the drive to
storage new information in different cells so all regions can be used
equally (at least for storing new information; this is called ware
leveling). One way to speed things up is asking the drive to reorganize data
so file fragments can be found in consecutive sectors or trim deleted
regions so fresh information can be written to more blocks (for HDD's and
SSD's, respectively), and this operation itself is tedious and produce bad
results if not done correctly and carefully.

I do understand the above explanation is a bit geeky, but I believe you need
to know some things about how things work. It is also a personal exercise to
refresh my memory on certain computer science topics (I majored in it not
long ago, and my interests were mostly hardware and operating systems, hence
I was sort of naturally drawn to screen reader internals and how it
interacts with system software).
Cheers,
Joseph

-----Original Message-----
From: nvda@nvda.groups.io [mailto:nvda@nvda.groups.io] On Behalf Of Roger
Stewart
Sent: Friday, January 19, 2018 7:58 AM
To: nvda@nvda.groups.io
Subject: Re: [nvda] Portable version degrading

The problem with this discussion is my portable version is on an internal
hard drive.  So why is this degrading?

Nothing else on this drive has any trouble and I've checked, and there's no
file system errors nor any fragmenting.


Roger












On 1/19/2018 8:28 AM, Antony Stone wrote:
USB drives do need to be unmounted before removing them, otherwise there is the risk of file system corruption. Precisely the same is true for external hard drives, floppy disks, or any other writeable medium you can temporarily attach to a computer.

I've never seen a USB thumb drive fall apart, and I think they're considerably more robust than floppy disks, which is basically what they replaced. You can also drop them on the floor with a good deal more confidence of them working afterwards than if you drop an external hard disk.

Yes, they're vulnerable to static electricity; that's why most of them have plastic caps to put over the contacts or a slider to retract the contacts into the body.

My experience is that if they're treated reasonably they work very well. If they're mistreated they'll give as many problems as any other mistreated storage medium.


Antony.

On Friday 19 January 2018 at 15:17:36, tonea.ctr.morrow@faa.gov wrote:

A few years back, I had a job for three years where people brought me their files on USB thumb drives. These things are horrible in terms of long life. They really do have to be unmounted prior to removing from the computer or they get corrupted. They physically fall apart easily. And the hardware inside seems to be more vulnerable to static-electricity data loss than other portable drives, certainly more vulnerable than most computers.



I would think that would be the problem.



Tonea



-----Original Message-----

I've noticed over the past couple years that my portable install of nvda will sometimes degrade or get a bit corrupted over time all by itself while the installed version is always stable as a rock. Does anyone know why this is and is there any way to prevent this from happening? I use the portable copy to test a couple add ons and if the portable version corrupts, it can make it appear that the add on is defective or has a bug while it really doesn't. Deleting the portable copy and making a new one will clear it up. I also notice a few functions of nvda either don't work at all or nvda gets very sluggish in responsiveness and this all gets back to normal after a complete flush and remake of the portable version. As I say, this never has happened at all with my installed copy on the same computer.





Roger










Re: Portable version degrading

 

Hi,
Config files and log files are not the only things NVDA will need to access from disk. Others include various DLL files, one or more executables and what not.
Cheers,
Joseph

-----Original Message-----
From: nvda@nvda.groups.io [mailto:nvda@nvda.groups.io] On Behalf Of Didier Colle
Sent: Friday, January 19, 2018 5:47 PM
To: nvda@nvda.groups.io
Subject: Re: [nvda] Portable version degrading

Dear Joseph,


On 20/01/2018 1:21, Joseph Lee wrote:
Hi,
Fragmentation will happen as long as new information is written in places that'll cause problems for fast reading later. Also, while something is running, the operating system will still need to access things on disk if asked by the program.
sure, of course.

Question:
Roger already answered that even with add-ons disabled, sluggishness and weird behavior remain.
Thus what does NVDA core (or at least non-add-on-related code) ask the OS to access on disc while being executed? Screen reading, the main function of NVDA, seems to me to have very little to do with disc access, except writing logs and reading configuration (which is probably loaded in RAM anyway).

Kind regards,

Didier

As for swapping configurations: in theory, yes, as long as the versions are compatible enough not to cause visible side effects. For example, if one swaps configurations between stable and next branches, that could raise problems in that some things required by next snapshots might not be present.
As for the add-on being the culprit: could be. One thing to try, though: what if Roger runs his portable copy with all add-ons disabled? If that improves performance, then it could be an add-on; if not, we should try something else.
Implicating file systems: Roger did say this is an internal drive, hence I put more weight on possible fragmentation and data-movement issues.
Cheers,
Joseph

-----Original Message-----
From: nvda@nvda.groups.io [mailto:nvda@nvda.groups.io] On Behalf Of
Didier Colle
Sent: Friday, January 19, 2018 3:40 PM
To: nvda@nvda.groups.io
Subject: Re: [nvda] Portable version degrading

Dear Joseph, Roger, all,


@Joseph: I am not sure I understand what point you are trying to make. Are you suggesting that a filesystem problem is indeed the root cause?


Trying to recapitulate a few things:

* "it can make it appear that the add on is defective or has a bug
while it really doesn't."

@Roger: for any further meaningful diagnosis, I believe a more concrete symptom description is needed. How does such a "would-be" bug manifest itself? Is it always the same "would-be" bug, or do many "would-be" bugs appear randomly? And when do such "would-be" bugs appear: during loading, or during execution of the add-on?

* "there's no file system errors"

I guess that means there are no issues with the physical/electronic/magnetic integrity of the storage medium itself (or that the filesystem has set bad blocks aside so that they are not used anymore). If corrupted/broken blocks on the storage medium were the root cause, something should be found in the logs, as loading the relevant Python modules should throw an exception (and if these exceptions are not logged, it should be possible to log them). Therefore, I dismiss storage medium/filesystem corruption as the root cause of the above-mentioned "would-be" bugs (assuming the bugs have to be interpreted as broken functionality).
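As a generic illustration of Didier's point (plain Python, not NVDA's actual module loader): a corrupted or truncated .py file raises an exception at import time, and that exception can be caught and logged rather than passing silently.

```python
import importlib
import logging

logging.basicConfig(level=logging.ERROR)

def load_addon_module(name):
    """Attempt to import a module by name, logging failures instead of
    crashing. A corrupted .py file typically raises SyntaxError (or
    ImportError) at import time, which would then show up in the log."""
    try:
        return importlib.import_module(name)
    except (ImportError, SyntaxError) as exc:
        logging.error("failed to load module %s: %s", name, exc)
        return None
```

So if bad blocks had mangled an add-on's Python files, one would expect loud import errors in the log rather than subtle sluggishness.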


* "I also notice a few functions of nvda either don't work at all or
nvda gets very sluggish in responsiveness"
@Roger: again, for any meaningful diagnosis, please provide a more concrete symptom description. What functions exactly are you speaking about? What does "not work at all" mean exactly: sluggishness with extremely long or infinite response times, or actual errors? Is the sluggishness general, or does it happen only in those specific functions? What do you mean by sluggishness: a response in a second? A few seconds? A minute or more? And when does the sluggishness happen: when loading add-ons/modules, or continuously?
* "... nor any fragmenting." Statement from Joseph: "In case of Roger's
issue: a possible contributing factor is constant add-on updates. He
uses an add-on that is updated on a regular basis, .. ..., potentially
fragmenting bits of files ..."
The two statements appear contradictory to me. Fragmentation may be a root cause of sluggishness, but only when access to the storage medium is needed, not during general execution, which typically takes place from RAM rather than from disc. Therefore, fragmentation issues appear very unlikely to me.

* "while the installed version is always stable as a rock." and "I use
the portable copy to test a couple add ons"
@Roger: how much do you use one and the other? How much usage does it take before the portable copy gets degraded?
The two statements suggest there is a problem with the portable copies. However, nobody else seems to be experiencing the same problem. Thus, I would translate this into the following question for you to test/investigate further: is there a conflict between the portable copies and your specific system setup, or is the issue caused by the add-ons under test?
To test the former possibility, why not use a fresh portable copy, replicating the setup of your installed version, instead of that installed version for a while?
To test the latter would probably require moving the add-on testing to the installed version: I guess you are using the portable version for this purpose exactly to avoid messing up the installed version. Would it be possible to do the testing in, for example, a virtual machine, so that you can test on an installed copy instead of a portable one while not messing up your main system?
Joseph, anyone else: is there a (possibly more cumbersome) way to perform testing on an installed version while keeping, at all times, the possibility to revert to a stable/clean situation? (E.g., a .bat script that swaps configuration files and add-on directories between the stable and testing versions, and that can easily be executed between exiting NVDA and restarting it?)
In case none of the above options is tried, my suggestion would then be to regularly take snapshot copies of your portable copy, so that when degradation takes place, a diff between the stable and degraded versions can be taken and investigated.
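A minimal sketch of the swap idea (shown in Python rather than a .bat file; the directory names are hypothetical examples, and it must be run only between exiting NVDA and restarting it, so no files inside are locked):

```python
import os

def swap_dirs(stable, testing):
    """Swap two directories by renaming through a temporary name.
    Run only while NVDA is closed so no files are locked."""
    tmp = stable + ".swap-tmp"
    os.rename(stable, tmp)      # stable -> temporary name
    os.rename(testing, stable)  # testing takes stable's place
    os.rename(tmp, testing)     # old stable becomes the testing copy

# Hypothetical usage with an NVDA configuration folder and its
# testing counterpart (paths are illustrative only):
# swap_dirs(r"nvda\userConfig", r"nvda\userConfig.testing")
```

The same three-rename trick translates directly into a .bat script using the `move` command; renaming is cheap because no file data is copied.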

In summary, I believe:
1) a much more concrete/detailed symptom description is needed before any meaningful statement regarding diagnosis is possible;
2) with the info I have, filesystem/storage medium problems/corruption are very unlikely;
3) further testing/investigation is needed in order to support or dismiss certain hypotheses.

Kind regards,

Didier

Re: Portable version degrading

Didier Colle
 

Dear Joseph,


On 20/01/2018 1:21, Joseph Lee wrote:
Hi,
Fragmentation will happen as long as new information is written in places that'll cause problems for fast reading later. Also, while something is running, the operating system will still need to access things on disk if asked by the program.
sure, of course.

Question:
Roger already answered that even with add-ons disabled, sluggishness and weird behavior remain.
Thus what does NVDA core (or at least non-add-on-related code) ask the OS to access on disc while being executed? Screen reading, the main function of NVDA, seems to me to have very little to do with disc access, except writing logs and reading configuration (which is probably loaded in RAM anyway).

Kind regards,

Didier

As for swapping configurations: in theory, yes as long as the versions are compatible enough to not cause visible side effects. For example, if one swaps configurations between stable and next branches, that could raise problems in that some things required by next snapshots might not be present.
As for the add-on being the culprit: could be. One thing to try though: what if Roger runs his portable copy with all add-ons disabled? If that improves performance, then it could be an add-on, if not, we should try something else.
Implicating file systems: Roger did say this is an internal drive, hence I put more weight on possible fragmentation and data movement issues.
Cheers,
Joseph

-----Original Message-----
From: nvda@nvda.groups.io [mailto:nvda@nvda.groups.io] On Behalf Of Didier Colle
Sent: Friday, January 19, 2018 3:40 PM
To: nvda@nvda.groups.io
Subject: Re: [nvda] Portable version degrading

Dear Joseph, roger, all,


@Joseph: not sure to understand what point you try to make. Is your suggestion there is indeed a filesystem problem as the root cause?


trying to recapitulate a few things:

* "it can make it appear that the add on is defective or has a bug while
it really doesn't."

@Roger: for any further meaningful diagnosis, I believe a more concrete
symptom description is needed? (how does such "would be" bug manifestate
itself? Is it always the same "would be" bug or do many "would be" bugs
appear randomly? when do such "would be" bugs appear (during loading,
during execution of the add-on)?)

* "there's no file system errors"

I guess that means there are no issues with the
physical/electronic/magnetic integrity of the storage medium itself (or
that the filesystem has set them aside such that they are not used
anymore). In case corrupted/broken blocks on the storage medium would be
the root cause, something should be found in the logs as loading the
relevant python modules should throw an exception (if these exceptions
are not logged, it should be possible to do so). Therefore, I dismiss
storage medium/filesystem corruption as root cause of the above
mentioned "would be" bugs (assuming bugs have to be interpreted as
broken functionality).


* "I also notice a few functions of nvda either don't work at all or
nvda gets very sluggish in responsiveness"
@Roger: again, for any meaningfull diagnosis, provide a more concrete
symptiom description. What functions are you exactly speaking about?
What does "not work at all" exactly mean: do you mean sluggishness with
extremely long / infinite response times? Or do you get errors? or ...
Is the sluggishness general or does it happen in those specific
functions? What do you mean by sluggishness: response in only a second?
A few seconds? A minute or more? When does sluggishness happen: at time
of loading add-on/modules or continuously or ...?
* "... nor any fragmenting.". Statement from Joseph: "In case of Roger's
issue: a possible contributing factor is constant add-on updates. He
uses an add-on that is updated on a regular basis, .. ..., potentially
fragmenting bits of files ..."
The two statements appear to me as contradictory. Fragmentation may be a
root cause of sluggishness, but only when access to storage medium is
needed and not during general execution which typically takes place from
RAM rather then from disc. Therefore, fragmentation issues appear very
unlikely to me.

* "while the installed version is always stable as a rock." and "I use
the portable copy to test a couple add ons"
@Roger: how much do you use one and the other? How much usage does it
take before the portable copy gets degraded?
The two statements suggest there is a problem with the portable copies.
However, there seems to be nobody else experiencing the same problem.
Thus, I would translate this into the following question that you would
need to test/investigage further: is there a conflict between the
portable copies and your specific system setup, or is the issue caused
by the add-ons under test?
To test the former possibility, why not using a fresh portable copy
replicating the setup of your installed version instead of that
installed version for a while?
To test the latter that would probably require moving the add-on testing
to the installed version: I guess you are using the portable version for
this purpose, exactly to avoid messing up the installed version. Would
you have the possibility to do the testing in for example a virtual
machine, such that you can test on an installed instead of a portable
copy version, while not messing up your main system with this testing?
Joseph, anyone else: is there a (possibly more cumbersome) way to
perform testing on an installed version while keeping at all times a
possibility to revert back to a stable/clean situation? (e.g., having a
.bat script that swaps configuration file and add-on directories between
stable and testing versions and that can easily be executed in between
exiting nvda and restarting it?)
In case none of the above options is tried, my suggestions would be then
to regularly take snapshot copies of your portable copy such that when
degradation takes place a diff between stable and degraded version can
be taken and investigated.

In summary, I believe:
1) a much more concrete/detailed/... symptom description is needed
before any meaningful statements regarding diagnosis is possible;
2) with the info I have, filesystem/storage medium problems/corruptions
are very unlikely.
3) further testing/investigation is needed in order to support/dismiss
certain hypotheses.

Kind regards,

Didier

On 19/01/2018 18:19, Joseph Lee wrote:
Hi,
It'll depend on what type of drive it is. If it's a traditional hard drive,
it'll degrade as data moves around, creating the need for defragmentation.
This is especially the case when data is repeatedly written and the file
system is asked to find new locations to hold the constantly changing data.
In case of solid-state drives, it'll degrade if the same region is written
repeatedly, as flash memory has limited endurance when it comes to data
reads and writes.
In case of Roger's issue: a possible contributing factor is constant add-on
updates. He uses an add-on that is updated on a regular basis, putting
strain on part of the drive where the add-on bits are stored. Thus, some
drive sectors are repeatedly bombarded with new information, and one way
operating systems will do in this case is move the new data somewhere else
on the drive, potentially fragmenting bits of files (I'll explain in a
moment). Thus one solution is to not test all add-on updates, but that's a
bit risky as Roger is one of the key testers for this add-on I'm talking
about.
Regarding fragmentation and what not: the following is a bit geeky but I
believe you should know about how some parts of a file system (an in
extension, operating systems) works, because I believe it'll help folks
better understand what might be going on:
Storage devices encountered in the wild are typically organized into many
parts, typically into blocks of fixed-length units called "sectors". A
sector is smallest unit of information that the storage device can present
to the outside world, as in how much data can be held on a storage device.
For example, when you store a small document on a hard disk drive (HDD) and
when you wish to open it in Notepad, Windows will ask a module that's in
charge of organizing and interpreting data on a drive (called a file system)
to locate the sector where the document (or magnets or flash cells that
constitute the document data) is stored and bring it out to you. To you, all
you see is the path to the document, but the file system will ask the drive
controller (a small computer inside hard disks and other storage devices) to
fetch data in a particular sector or region. Depending on what kind of
storage medium you're dealing with, reading from disks may involve waiting
for a platter with desired sector to come to the attention of a read/write
head (a thin magnetic sensor used to detect or make changes to magnetic
fields) or peering inside windows and extracting electrons trapped within.
This last sentence is a vivid description of how hard disks and solid-state
drives really work behind the scenes, respectively.
But storage devices are not just meant for reading things for your
enjoyment. Without means of storing new things, it becomes useless.
Depending on the medium you've got, when you save something to a storage
device, the file system in charge of the device will ask the drive
controller to either find a spot on a disk filled with magnets and change
some magnets, or apply heat pressure to dislodge all cells on a block, erase
the block, add new things, and fill the empty block with modified data
(including old bits). You can imagine how tedious this can get, but as far
as your work is concerned, it is safe and sound.
Now imagine you wish to read and write repeatedly on a storage device. The
file system will repeatedly ask the drive hardware to fetch data from
specific regions, and will look for new locations to store changes. On a
hard drive, because there are limited number of heads and it'll take a while
for desired magnetic region to come to attention of one, read speed is slow,
hence increased latency (latency refers to how long you have to wait for
something to happen). When it comes to saving things to HDD's, all the drive
needs to do is tell the read/write head to change some magnets wherever it
wishes, hence data overriding is possible and easy. But operating systems
(rather, file systems) are smarter than that, as we'll see below.
In case of solid-state drives, reading data is simple as looking up the
address (or sector) where the electrons comprising the data you want is
saved (akin to walking down a street grid), so no need to wait for a sensor
to wait for something to happen. This is the reason why solid-state drives
appear to respond fast when reading something. On the other hand, writing or
injecting electrons is very slow because the drive needs to erase the entire
block before writing new data. In other words, just changing a letter in a
document and saving it to an SSD involves a lot of work, hence SSD's are
slower when it comes to writing new things, but because of the underlying
technology in use, it is way faster than hard disks.
As hinted above, file systems are smarter than drive controllers to some
extent. If data is written to a drive, the drive controller will process
whatever it comes along its path. But file systems won't let drive
controllers get away with that: file systems such as NTFS (New Technology
File System) will schedule data writes so it'll have minimal impact on the
lifespan of a storage device. For hard disks, it'll try its best to tell the
drive to store file data in consecutive locations in one big batch, but that
doesn't always work. For SSD's, the file system will ask the drive to
storage new information in different cells so all regions can be used
equally (at least for storing new information; this is called ware
leveling). One way to speed things up is asking the drive to reorganize data
so file fragments can be found in consecutive sectors or trim deleted
regions so fresh information can be written to more blocks (for HDD's and
SSD's, respectively), and this operation itself is tedious and produce bad
results if not done correctly and carefully.

I do understand the above explanation is a bit geeky, but I believe you need
to know some things about how things work. It is also a personal exercise to
refresh my memory on certain computer science topics (I majored in it not
long ago, and my interests were mostly hardware and operating systems, hence
I was sort of naturally drawn to screen reader internals and how it
interacts with system software).
Cheers,
Joseph

-----Original Message-----
From: nvda@nvda.groups.io [mailto:nvda@nvda.groups.io] On Behalf Of Roger
Stewart
Sent: Friday, January 19, 2018 7:58 AM
To: nvda@nvda.groups.io
Subject: Re: [nvda] Portable version degrading

The problem with this discussion is my portable version is on an internal
hard drive. So why is this degrading?

Nothing else on this drive has any trouble and I've checked, and there's no
file system errors nor any fragmenting.


Roger












On 1/19/2018 8:28 AM, Antony Stone wrote:
USB drives do need to be unmounted before removing them, otherwise there
is
the risk of file system corruption. Precisely the same is true for
external
hard drives, floppy disks, or any other writeable medium you can
temporarily
attach to a computer.

I've never seen a USB thumb drive fall apart, and I think they're
considerably
more robust than floppy disks, which is basically what they replaced. You
can
also drop them on the floor with a good deal more confidence of them
working
afterwards than if you drop an external hard disk.

Yes, they're vulnerable to static electricity; that's why most of them
have
plastic caps to put over the contacts or a slider to retract the contacts
into
the body.

My experience is that if they're treated reasonably they work very well.
If
they're mistreated they'll give as many problems as any other mistreated
storage medium.


Antony.

On Friday 19 January 2018 at 15:17:36, tonea.ctr.morrow@faa.gov wrote:

A few years back, I had a job for three years where people brought me their files on USB thumb drives. These things are horrible in terms of long life. They really do have to be unmounted prior to removing from the computer or they get corrupted. They physically fall apart easily. And the hardware inside seems to be more vulnerable to static-electricity data loss than other portable drives, certainly more vulnerable than most computers.



I would think that would be the problem.



Tonea



-----Original Message-----

I've noticed over the past couple years that my portable install of nvda will sometimes degrade or get a bit corrupted over time all by itself, while the installed version is always stable as a rock. Does anyone know why this is, and is there any way to prevent this from happening? I use the portable copy to test a couple of add-ons, and if the portable version corrupts, it can make it appear that the add-on is defective or has a bug when it really doesn't. Deleting the portable copy and making a new one will clear it up. I also notice a few functions of nvda either don't work at all or nvda gets very sluggish in responsiveness, and this all gets back to normal after a complete flush and remake of the portable version. As I say, this has never happened at all with my installed copy on the same computer.





Roger

Re: Portable version degrading

 

Hi,
No, I don't have Mac computers.
As for the add-on in question: there are some things under active development, not just updating to newer Python versions. One in particular is in response to recent low-level work in NVDA.
For those curious, the add-on Roger refers to is StationPlaylist Studio. Unlike many other add-ons, it supports an add-on update feature, hence Roger's comments about testing snapshot builds. There is another add-on that shares this trait, and both use the same update-check code (I am the author of that add-on update code, which provides the foundation for offering add-on updates directly from NVDA in the future). In a way, I'm using the add-ons I've created or maintain to test potential candidate features for NVDA Core, and add-on updates is one of them.
Cheers,
Joseph

-----Original Message-----
From: nvda@nvda.groups.io [mailto:nvda@nvda.groups.io] On Behalf Of Roger Stewart
Sent: Friday, January 19, 2018 5:03 PM
To: nvda@nvda.groups.io
Subject: Re: [nvda] Portable version degrading

Boy, most of this is way over my head! However, I did try running it with add-ons disabled and the sluggishness still occurred. By this I mean that when I type a letter, it may take a second or so for it to be voiced, while this is usually instantaneous. Also, some functions, like hitting the nvda key to either invoke the menu or quit nvda, just won't work at all, and I need to shut it down with my Pview utility.

Most of the other stuff I don't understand at all. I know Joseph runs something called a virtual machine, but it is on his Mac I think, and I don't have a Mac here, so I probably can't do this. If this is a built-in function of Windows, I'm not aware of it nor how to use it.

Also, I always run my portable copy on my F drive, which is a mechanical hard drive. My main drive is an SSD, so I don't want to update anything there any more often than necessary. I use the portable version to test one particular add-on, but this add-on is nearly fully matured now and should need no further testing, except for the version for the new Python release, to make sure everything is working fine.

Roger

On 1/19/2018 5:40 PM, Didier Colle wrote:
Dear Joseph, roger, all,


@Joseph: I am not sure I understand what point you are trying to make. Are you
suggesting there is indeed a filesystem problem as the root cause?


trying to recapitulate a few things:

* "it can make it appear that the add on is defective or has a bug
while it really doesn't."

@Roger: for any further meaningful diagnosis, I believe a more
concrete symptom description is needed. How does such a "would be" bug
manifest itself? Is it always the same "would be" bug, or do many
"would be" bugs appear randomly? When do such "would be" bugs appear
(during loading, or during execution of the add-on)?

* "there's no file system errors"

I guess that means there are no issues with the
physical/electronic/magnetic integrity of the storage medium itself
(or that the filesystem has set them aside such that they are not used
anymore). In case corrupted/broken blocks on the storage medium would
be the root cause, something should be found in the logs as loading
the relevant python modules should throw an exception (if these
exceptions are not logged, it should be possible to do so). Therefore,
I dismiss storage medium/filesystem corruption as root cause of the
above mentioned "would be" bugs (assuming bugs have to be interpreted
as broken functionality).
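Didier's point that corruption should surface at module load time can be checked directly. The following is a rough sketch only: the add-on directory path below is a hypothetical placeholder, not NVDA's actual layout, and the logging setup is an assumption for illustration.

```python
import importlib
import logging
import sys

logging.basicConfig(level=logging.DEBUG)

# Hypothetical add-on directory of a portable copy; the path is an
# assumption for illustration, not NVDA's real directory structure.
sys.path.insert(0, r"F:\nvda_portable\userConfig\addons")

def try_load(module_name):
    """Import a module, logging any error so corruption shows up in the log."""
    try:
        module = importlib.import_module(module_name)
        logging.debug("loaded %s from %s", module_name,
                      getattr(module, "__file__", "<builtin>"))
        return module
    except SyntaxError:
        # A truncated or bit-rotted .py file typically fails here.
        logging.exception("%s looks corrupted (syntax error on import)",
                          module_name)
    except Exception:
        logging.exception("%s failed to import", module_name)
    return None
```

If a degraded portable copy really had corrupted module files, a loader of this kind should leave exceptions in the log rather than fail silently.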


* "I also notice a few functions of nvda either don't work at all or
nvda gets very sluggish in responsiveness"
@Roger: again, for any meaningful diagnosis, please provide a more
concrete symptom description. Exactly what functions are you speaking
about? What does "not work at all" mean exactly: sluggishness with
extremely long or infinite response times, or do you get errors? Is
the sluggishness general, or does it happen only in those specific
functions? What do you mean by sluggishness: a response in only a
second? A few seconds? A minute or more? When does sluggishness
happen: at the time of loading add-ons/modules, or continuously?
* "... nor any fragmenting." Statement from Joseph: "In case of
Roger's issue: a possible contributing factor is constant add-on
updates. He uses an add-on that is updated on a regular basis, ... ...,
potentially fragmenting bits of files ..."
The two statements appear contradictory to me. Fragmentation may be a
root cause of sluggishness, but only when access to the storage medium
is needed, not during general execution, which typically takes place
from RAM rather than from disc. Therefore, fragmentation issues appear
very unlikely to me.

* "while the installed version is always stable as a rock." and "I use
the portable copy to test a couple add ons"
@Roger: how much do you use one and the other? How much usage does it
take before the portable copy gets degraded?
The two statements suggest there is a problem with the portable
copies. However, nobody else seems to be experiencing the same
problem. Thus, I would translate this into the following question that
you would need to test/investigate further: is there a conflict
between the portable copies and your specific system setup, or is the
issue caused by the add-ons under test?
To test the former possibility, why not use a fresh portable copy
replicating the setup of your installed version, instead of that
installed version, for a while?
To test the latter would probably require moving the add-on testing to
the installed version. I guess you are using the portable version for
this purpose exactly to avoid messing up the installed version. Would
you have the possibility to do the testing in, for example, a virtual
machine, so that you can test on an installed rather than a portable
copy while not messing up your main system with this testing? Joseph,
anyone else: is there a (possibly more cumbersome) way to perform
testing on an installed version while keeping at all times a
possibility to revert back to a stable/clean situation? (e.g., a .bat
script that swaps configuration files and add-on directories between
stable and testing versions and that can easily be executed between
exiting nvda and restarting it?) If none of the above options is
tried, my suggestion would then be to regularly take snapshot copies
of your portable copy, so that when degradation takes place a diff
between the stable and degraded versions can be taken and
investigated.
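The snapshot-and-diff idea above can be sketched with the standard library's filecmp module. This is only a sketch under stated assumptions: the two paths are hypothetical placeholders for wherever the portable copy actually lives, and NVDA should not be running while the snapshot is taken.

```python
import filecmp
import shutil
from pathlib import Path

# Hypothetical locations; adjust to wherever the portable copy actually lives.
PORTABLE = Path(r"F:\nvda_portable")
SNAPSHOT = Path(r"F:\nvda_portable_snapshot")

def take_snapshot(src=None, dst=None):
    """Copy the whole portable directory aside (run while NVDA is closed)."""
    src, dst = src or PORTABLE, dst or SNAPSHOT
    if dst.exists():
        shutil.rmtree(dst)
    shutil.copytree(src, dst)

def report_drift(a, b):
    """Recursively list files that differ between snapshot a and current copy b."""
    cmp = filecmp.dircmp(a, b)
    drift = [f"changed: {name}" for name in cmp.diff_files]
    drift += [f"only in snapshot: {name}" for name in cmp.left_only]
    drift += [f"only in current: {name}" for name in cmp.right_only]
    for sub in cmp.common_dirs:
        drift += [f"{sub}/{entry}" for entry in report_drift(a / sub, b / sub)]
    return drift

# Once degradation appears, diff the degraded copy against the last snapshot:
# for line in report_drift(SNAPSHOT, PORTABLE):
#     print(line)
```

Any file listed as "changed" between a known-good snapshot and a degraded copy would be the first place to look.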

In summary, I believe:
1) a much more concrete/detailed symptom description is needed before
any meaningful diagnosis is possible;
2) with the info I have, filesystem/storage medium
problems/corruption are very unlikely;
3) further testing/investigation is needed in order to support or
dismiss certain hypotheses.

Kind regards,

Didier

On 19/01/2018 18:19, Joseph Lee wrote:
Hi,
It'll depend on what type of drive it is. If it's a traditional hard
drive, it'll degrade as data moves around, creating the need for
defragmentation.
This is especially the case when data is repeatedly written and the
file system is asked to find new locations to hold the constantly
changing data.
In case of solid-state drives, it'll degrade if the same region is
written repeatedly, as flash memory has limited endurance when it
comes to data reads and writes.
In case of Roger's issue: a possible contributing factor is constant
add-on updates. He uses an add-on that is updated on a regular basis,
putting strain on the part of the drive where the add-on bits are
stored. Thus, some drive sectors are repeatedly bombarded with new
information, and one thing operating systems will do in this case is
move the new data somewhere else on the drive, potentially
fragmenting bits of files (I'll explain in a moment). Thus one
solution is to not test all add-on updates, but that's a bit risky as
Roger is one of the key testers for the add-on I'm talking about.
Regarding fragmentation and what not: the following is a bit geeky, but I believe you should know how some parts of a file system (and, by extension, an operating system) work, because I believe it'll help folks better understand what might be going on:
Storage devices encountered in the wild are typically organized into blocks of fixed-length units called "sectors". A sector is the smallest unit of information that the storage device can present to the outside world.
For example, when you store a small document on a hard disk drive (HDD) and later wish to open it in Notepad, Windows will ask the module that's in charge of organizing and interpreting data on a drive (called a file system) to locate the sectors where the document (or the magnets or flash cells that constitute the document data) is stored and bring it out to you. To you, all you see is the path to the document, but the file system will ask the drive controller (a small computer inside hard disks and other storage devices) to fetch data from a particular sector or region. Depending on what kind of storage medium you're dealing with, reading may involve waiting for the platter region with the desired sector to come to the attention of a read/write head (a thin magnetic sensor used to detect or make changes to magnetic fields), or peering inside windows and extracting electrons trapped within. That last sentence is a vivid description of how hard disks and solid-state drives really work behind the scenes, respectively.
But storage devices are not just meant for reading things for your enjoyment. Without a means of storing new things, a device becomes useless. Depending on the medium you've got, when you save something to a storage device, the file system in charge of the device will ask the drive controller either to find a spot on a disk filled with magnets and change some magnets, or to erase an entire block of flash cells and rewrite it with the modified data (including the old bits). You can imagine how tedious this can get, but as far as your work is concerned, it is safe and sound.
Now imagine you wish to read and write repeatedly on a storage device. The file system will repeatedly ask the drive hardware to fetch data from specific regions, and will look for new locations to store changes. On a hard drive, because there are a limited number of heads and it'll take a while for the desired magnetic region to come to the attention of one, read speed is slow, hence increased latency (latency refers to how long you have to wait for something to happen). When it comes to saving things to HDDs, all the drive needs to do is tell the read/write head to change some magnets wherever it wishes, so overwriting data in place is possible and easy. But operating systems (rather, file systems) are smarter than that, as we'll see below.
In the case of solid-state drives, reading data is as simple as looking up the address (or sector) where the electrons comprising the data you want are stored (akin to walking down a street grid), so there's no need to wait for a mechanical part to move. This is the reason why solid-state drives appear to respond so fast when reading. On the other hand, writing (injecting electrons) is comparatively slow, because the drive needs to erase an entire block before writing new data. In other words, just changing a letter in a document and saving it to an SSD involves a lot of work, hence SSDs are slower at writing than at reading, though thanks to the underlying technology they are still way faster than hard disks.
As hinted above, file systems are smarter than drive controllers to some extent. A drive controller will process whatever write requests come along its path, but file systems won't let drive controllers get away with that: a file system such as NTFS (New Technology File System) will schedule data writes so they'll have minimal impact on the lifespan of the storage device. For hard disks, it'll try its best to tell the drive to store file data in consecutive locations in one big batch, but that doesn't always work. For SSDs, the file system will ask the drive to store new information in different cells so all regions can be used equally (at least for storing new information; this is called wear leveling). One way to speed things up is to ask the drive to reorganize data so file fragments can be found in consecutive sectors (defragmentation, for HDDs), or to trim deleted regions so fresh information can be written to more blocks (for SSDs); this operation is itself tedious and can produce bad results if not done correctly and carefully.
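To make the cost of fragmentation concrete, here is a toy model (purely illustrative; real file systems, caching, and drive geometry are far more complex) comparing the head travel needed to read the same file stored contiguously versus scattered across a simulated disk:

```python
def seek_distance(block_addresses):
    """Total head travel (in block units) to visit a file's blocks in file order."""
    travel = 0
    position = 0  # assume the head starts parked at block 0
    for addr in block_addresses:
        travel += abs(addr - position)
        position = addr
    return travel

# The same 8-block file stored two ways on an imaginary 1000-block disk:
contiguous = list(range(100, 108))                   # blocks 100..107, one fragment
fragmented = [100, 731, 102, 455, 9, 870, 106, 240]  # scattered fragments

print(seek_distance(contiguous))   # 107: one seek, then adjacent blocks
print(seek_distance(fragmented))   # 3918: the head criss-crosses the platter
```

The fragment positions are invented for the example, but the shape of the result is the point: the fragmented layout forces dozens of times more head movement for exactly the same data, which is why fragmentation hurts HDDs but barely matters for SSDs.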

I do understand the above explanation is a bit geeky, but I believe
you need to know some things about how things work. It is also a
personal exercise to refresh my memory on certain computer science
topics (I majored in it not long ago, and my interests were mostly
hardware and operating systems, hence I was sort of naturally drawn
to screen reader internals and how it interacts with system
software).
Cheers,
Joseph



Re: Portable version degrading

Adriani Botez
 

What kind of storage drive do you use? Is it USB 3.1, 3.0 or 2.0? I have found that the portable version runs more smoothly on a USB 3 drive than on USB 2. The next question is what specifications your drive has: for example, make, model, and read and write speed. I use a stick with quite high speed and I don't feel any difference compared to my installed version.
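One way to put numbers on the USB 2 versus USB 3 difference is to time a sequential read from the drive holding the portable copy. A rough sketch follows; the example path at the bottom is a placeholder, not a real file.

```python
import time

def read_speed_mb_s(path, chunk_size=1024 * 1024):
    """Sequentially read a file and return throughput in megabytes per second."""
    total_bytes = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while True:
            data = f.read(chunk_size)
            if not data:
                break
            total_bytes += len(data)
    elapsed = time.perf_counter() - start
    if elapsed == 0:
        return float("inf")
    return (total_bytes / (1024 * 1024)) / elapsed

# Point this at a large file on the drive holding the portable copy, e.g.:
# print(read_speed_mb_s(r"E:\some_large_file.bin"))
```

Comparing the figure for the USB stick against the internal drive would show whether the bus is slow enough to explain sluggishness (USB 2 tops out around 35 MB/s in practice, while USB 3 and internal drives are far faster).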

Best
Adriani
Von meinem iPhone gesendet

Am 20.01.2018 um 01:51 schrieb Roger Stewart <paganus2@gmail.com>:

I did try running it with all add-ons disabled and it was still sluggish and acting weird, so it wasn't an add-on causing the trouble.

Roger

On 1/19/2018 6:21 PM, Joseph Lee wrote:
Hi,
Fragmentation will happen as long as new information is written in places that'll cause problems for fast reading later. Also, while something is running, the operating system will still need to access things on disk if asked by the program.
As for swapping configurations: in theory, yes as long as the versions are compatible enough to not cause visible side effects. For example, if one swaps configurations between stable and next branches, that could raise problems in that some things required by next snapshots might not be present.
As for the add-on being the culprit: could be. One thing to try, though: what if Roger runs his portable copy with all add-ons disabled? If that improves performance, then it could be an add-on; if not, we should try something else.
Implicating file systems: Roger did say this is an internal drive, hence I put more weight on possible fragmentation and data movement issues.
Cheers,
Joseph




Re: Portable version degrading

Roger Stewart
 

Boy, most of this is way over my head! However, I did try running it with add ons disabled and the sluggishness still occurred. By this I mean that when I type a letter, it may take a second or so for it to be voiced while this is usually instantaneous. Also, some functions like hitting the nvda key to either invoke the menu or quit nvda just won't work at all and I need to shut it down with my Pview utility.

Most of the other stuff I don't understand at all. I know Joseph runs something called a virtual machine but it is on his Mac I think and I don't have a Mac here so I probably can't do this. If this is a built in function of Windows, I'm not aware of it nor how to use it.

Also, I always run my portable copy on my F drive which is a mechanical hard drive. My main drive is an SSD, so I don't want to update anything there any more often than necessary. I use the portable version to test one particular add on, but this add on is nearly fully matured now and so should need no further testing except for the version for the new Python version to make sure everything is working fine.

Roger

On 1/19/2018 5:40 PM, Didier Colle wrote:
Dear Joseph, roger, all,


@Joseph: I am not sure I understand what point you are trying to make. Are you suggesting there is indeed a filesystem problem as the root cause?


trying to recapitulate a few things:

* "it can make it appear that the add on is defective or has a bug while it really doesn't."

@Roger: for any further meaningful diagnosis, I believe a more concrete symptom description is needed. (How does such a "would be" bug manifest itself? Is it always the same "would be" bug, or do many "would be" bugs appear randomly? When do such "would be" bugs appear: during loading, or during execution of the add-on?)

* "there's no file system errors"

I guess that means there are no issues with the physical/electronic/magnetic integrity of the storage medium itself (or that the filesystem has set bad blocks aside such that they are not used anymore). If corrupted/broken blocks on the storage medium were the root cause, something should be found in the logs, as loading the relevant python modules should throw an exception (and if these exceptions are not logged, it should be possible to make them so). Therefore, I dismiss storage medium/filesystem corruption as the root cause of the above mentioned "would be" bugs (assuming the bugs have to be interpreted as broken functionality).
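Didier's point here — that a module corrupted on disk would fail loudly at import time, and that such failures can be logged — can be illustrated with a minimal, self-contained Python sketch. The file names and the `try_load` helper are hypothetical, purely for illustration, and are not part of NVDA's actual loading code:

```python
import importlib.util
import logging
import os
import tempfile

logging.basicConfig(level=logging.ERROR)

def try_load(path):
    """Attempt to load a Python module from disk; log the traceback and
    return None if the file is corrupted or otherwise unloadable."""
    spec = importlib.util.spec_from_file_location("candidate", path)
    module = importlib.util.module_from_spec(spec)
    try:
        spec.loader.exec_module(module)
        return module
    except Exception:
        logging.exception("Failed to load %s", path)
        return None

# A deliberately corrupted module, as a damaged file might look on disk:
with tempfile.TemporaryDirectory() as d:
    bad = os.path.join(d, "broken.py")
    with open(bad, "w") as f:
        f.write("def announce(:\n")  # truncated/garbled source
    assert try_load(bad) is None     # the failure is caught and logged
```

The point being: a truly corrupted add-on file would produce this kind of loud, logged exception rather than silent sluggishness.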


* "I also notice a few functions of nvda either don't work at all or nvda gets very sluggish in responsiveness"
@Roger: again, for any meaningful diagnosis, provide a more concrete symptom description. What functions are you exactly speaking about? What does "not work at all" exactly mean: do you mean sluggishness with extremely long / infinite response times? Or do you get errors? Or something else? Is the sluggishness general, or does it happen only in those specific functions? What do you mean by sluggishness: a response in only a second? A few seconds? A minute or more? When does the sluggishness happen: at the time of loading the add-on/modules, or continuously?
* "... nor any fragmenting.". Statement from Joseph: "In case of Roger's issue: a possible contributing factor is constant add-on updates. He uses an add-on that is updated on a regular basis, .. ..., potentially fragmenting bits of files ..."
The two statements appear to me as contradictory. Fragmentation may be a root cause of sluggishness, but only when access to the storage medium is needed, and not during general execution, which typically takes place from RAM rather than from disc. Therefore, fragmentation issues appear very unlikely to me.

* "while the installed version is always stable as a rock." and "I use the portable copy to test a couple add ons"
@Roger: how much do you use one and the other? How much usage does it take before the portable copy gets degraded?
The two statements suggest there is a problem with the portable copies. However, there seems to be nobody else experiencing the same problem. Thus, I would translate this into the following question that you would need to test/investigate further: is there a conflict between the portable copies and your specific system setup, or is the issue caused by the add-ons under test?
To test the former possibility, why not use a fresh portable copy, replicating the setup of your installed version, instead of that installed version for a while?
Testing the latter would probably require moving the add-on testing to the installed version: I guess you are using the portable version for this purpose exactly to avoid messing up the installed version. Would you have the possibility to do the testing in, for example, a virtual machine, such that you can test on an installed copy instead of a portable one, while not messing up your main system with this testing? Joseph, anyone else: is there a (possibly more cumbersome) way to perform testing on an installed version while keeping at all times a possibility to revert back to a stable/clean situation? (e.g., having a .bat script that swaps the configuration file and add-on directories between stable and testing versions and that can easily be executed in between exiting nvda and restarting it?)
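The swap Didier describes could just as well be sketched in Python as in a .bat file. This is a minimal illustration (the directory names are hypothetical, and it must only run between exiting nvda and restarting it):

```python
import os

def swap_dirs(stable, testing):
    """Swap two sibling directories via a three-step rename.
    Only safe while nothing (i.e. NVDA) has the directories open."""
    tmp = stable + ".swap-tmp"
    os.rename(stable, tmp)      # stable -> temporary name
    os.rename(testing, stable)  # testing takes stable's place
    os.rename(tmp, testing)     # old stable takes testing's place
```

Because `os.rename` on the same volume only rewrites directory entries, the swap is near-instant; a real script would also guard against a leftover `.swap-tmp` directory from an interrupted run.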
In case none of the above options is tried, my suggestion would then be to regularly take snapshot copies of your portable copy, such that when degradation takes place a diff between the stable and degraded versions can be taken and investigated.
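The snapshot-and-diff idea can be done with Python's standard `filecmp` module; a minimal sketch (the directory layout is assumed, nothing here is NVDA-specific):

```python
import filecmp

def changed_files(snapshot_dir, current_dir):
    """Recursively list files that differ between a pristine snapshot and
    the current copy, plus entries present on only one side."""
    diffs = []

    def walk(cmp_obj, prefix=""):
        # diff_files: common files whose contents differ;
        # left_only/right_only: entries missing from the other side.
        for name in cmp_obj.diff_files + cmp_obj.left_only + cmp_obj.right_only:
            diffs.append(prefix + name)
        for name, sub in cmp_obj.subdirs.items():
            walk(sub, prefix + name + "/")

    walk(filecmp.dircmp(snapshot_dir, current_dir))
    return sorted(diffs)
```

Running this against a fresh snapshot and a "degraded" portable copy would narrow the investigation down to exactly the files that changed.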

In summary, I believe:
1) a much more concrete/detailed symptom description is needed before any meaningful statements regarding diagnosis are possible;
2) with the info I have, filesystem/storage medium problems/corruptions are very unlikely;
3) further testing/investigation is needed in order to support/dismiss certain hypotheses.

Kind regards,

Didier

On 19/01/2018 18:19, Joseph Lee wrote:
Hi,
It'll depend on what type of drive it is. If it's a traditional hard drive, it'll degrade as data moves around, creating the need for defragmentation. This is especially the case when data is repeatedly written and the file system is asked to find new locations to hold the constantly changing data. In the case of solid-state drives, it'll degrade if the same region is written repeatedly, as flash memory has limited endurance when it comes to data writes and erases.
In the case of Roger's issue: a possible contributing factor is constant add-on updates. He uses an add-on that is updated on a regular basis, putting strain on the part of the drive where the add-on bits are stored. Thus, some drive sectors are repeatedly bombarded with new information, and one thing operating systems will do in this case is move the new data somewhere else on the drive, potentially fragmenting bits of files (I'll explain in a moment). Thus one solution is to not test all add-on updates, but that's a bit risky as Roger is one of the key testers for this add-on I'm talking about.
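To make the cost of fragmentation concrete: a file stored in consecutive sectors can be read after a single head movement, while a fragmented file needs a seek per fragment. A toy model (purely illustrative, with made-up sector numbers):

```python
def seek_count(sectors):
    """Count head movements needed to read a file laid out in the given
    sector order: a new seek whenever the next sector is not adjacent."""
    seeks, prev = 0, None
    for s in sectors:
        if prev is None or s != prev + 1:
            seeks += 1
        prev = s
    return seeks

# A contiguous file costs one seek; the same data fragmented costs four.
assert seek_count([500, 501, 502, 503]) == 1
assert seek_count([500, 871, 501, 120]) == 4
```

Each extra seek on a mechanical drive is milliseconds of waiting, which is where fragmentation-induced latency would come from.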
Regarding fragmentation and what not: the following is a bit geeky, but I believe you should know about how some parts of a file system (and, by extension, operating systems) work, because I believe it'll help folks better understand what might be going on.

Storage devices encountered in the wild are typically organized into many parts, typically into blocks of fixed-length units called "sectors". A sector is the smallest unit of information that the storage device can present to the outside world, as in how much data can be held on a storage device. For example, when you store a small document on a hard disk drive (HDD) and then wish to open it in Notepad, Windows will ask a module that's in charge of organizing and interpreting data on a drive (called a file system) to locate the sector where the document (or the magnets or flash cells that constitute the document data) is stored and bring it out to you. To you, all you see is the path to the document, but the file system will ask the drive controller (a small computer inside hard disks and other storage devices) to fetch data in a particular sector or region. Depending on what kind of storage medium you're dealing with, reading from disks may involve waiting for the platter with the desired sector to come to the attention of a read/write head (a thin magnetic sensor used to detect or make changes to magnetic fields) or peering inside windows and extracting electrons trapped within. This last sentence is a vivid description of how hard disks and solid-state drives really work behind the scenes, respectively.
But storage devices are not just meant for reading things for your enjoyment. Without a means of storing new things, they become useless. Depending on the medium you've got, when you save something to a storage device, the file system in charge of the device will ask the drive controller to either find a spot on a disk filled with magnets and change some magnets, or apply a voltage to dislodge all cells on a block, erase the block, add new things, and fill the empty block with modified data (including old bits). You can imagine how tedious this can get, but as far as your work is concerned, it is safe and sound.
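The erase-then-rewrite cycle just described can be modelled in a few lines. This toy `FlashBlock` class is an illustration only (not how any real controller is programmed); it shows how changing a single byte still costs a whole-block erase:

```python
class FlashBlock:
    """Toy model of one flash block: any rewrite requires erasing the
    whole block first, so even a one-byte change costs a full erase."""
    def __init__(self, size=16):
        self.size = size
        self.cells = bytearray(size)
        self.erase_count = 0

    def program(self, data):
        self.erase_count += 1  # erase the whole block before writing
        self.cells = bytearray(data.ljust(self.size, b"\x00"))

def change_one_byte(block, index, value):
    """Read-modify-write: copy the block out, patch one byte, rewrite it all."""
    data = bytearray(block.cells)
    data[index] = value
    block.program(bytes(data))
```

The `erase_count` ticking up on every tiny change is exactly the endurance cost Joseph is describing.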
Now imagine you wish to read and write repeatedly on a storage device. The file system will repeatedly ask the drive hardware to fetch data from specific regions, and will look for new locations to store changes. On a hard drive, because there are a limited number of heads and it'll take a while for the desired magnetic region to come to the attention of one, read speed is slow, hence increased latency (latency refers to how long you have to wait for something to happen). When it comes to saving things to HDDs, all the drive needs to do is tell the read/write head to change some magnets wherever it wishes, hence data overwriting is possible and easy. But operating systems (rather, file systems) are smarter than that, as we'll see below.
In the case of solid-state drives, reading data is as simple as looking up the address (or sector) where the electrons comprising the data you want are stored (akin to walking down a street grid), so there is no need to wait for a mechanical part to move into position. This is the reason why solid-state drives appear to respond fast when reading something. On the other hand, writing, or injecting electrons, is very slow because the drive needs to erase an entire block before writing new data. In other words, just changing a letter in a document and saving it to an SSD involves a lot of work, hence SSDs are slower at writing than at reading, though because of the underlying technology in use, they are still way faster than hard disks.
As hinted above, file systems are smarter than drive controllers to some extent. If data is written to a drive, the drive controller will process whatever comes along its path. But file systems won't let drive controllers get away with that: file systems such as NTFS (New Technology File System) will schedule data writes so they'll have minimal impact on the lifespan of a storage device. For hard disks, the file system will try its best to tell the drive to store file data in consecutive locations in one big batch, but that doesn't always work. For SSDs, the file system will ask the drive to store new information in different cells so all regions can be used equally (at least for storing new information; this is called wear leveling). One way to speed things up is asking the drive to reorganize data so file fragments can be found in consecutive sectors, or to trim deleted regions so fresh information can be written to more blocks (for HDDs and SSDs, respectively), and this operation itself is tedious and can produce bad results if not done correctly and carefully.
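Wear leveling itself is simple to picture: route each new write to the least-worn block so erases spread evenly across the device. A toy sketch (illustrative only; real controllers do this in firmware with far more bookkeeping):

```python
def pick_block(wear):
    """Wear leveling in miniature: choose the least-worn block for the next write."""
    return min(range(len(wear)), key=wear.__getitem__)

def write_update(wear):
    """Record one write (e.g. one add-on update) and return the block used."""
    block = pick_block(wear)
    wear[block] += 1
    return block

# 80 updates over 8 blocks end up spread perfectly evenly.
wear = [0] * 8
for _ in range(80):
    write_update(wear)
assert wear == [10] * 8
```

This is why repeated add-on updates on an SSD don't actually hammer the same physical cells, even though the file path never changes.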

I do understand the above explanation is a bit geeky, but I believe you need to know some things about how things work. It is also a personal exercise to refresh my memory on certain computer science topics (I majored in it not long ago, and my interests were mostly hardware and operating systems, hence I was naturally drawn to screen reader internals and how a screen reader interacts with system software).
Cheers,
Joseph


Re: Portable version degrading

Roger Stewart
 

I did try running it with all add ons disabled and it was still sluggish and acting weird, so it wasn't an add on causing the trouble.

Roger

On 1/19/2018 6:21 PM, Joseph Lee wrote:
Hi,
Fragmentation will happen as long as new information is written in places that'll cause problems for fast reading later. Also, while something is running, the operating system will still need to access things on disk if asked by the program.
As for swapping configurations: in theory, yes as long as the versions are compatible enough to not cause visible side effects. For example, if one swaps configurations between stable and next branches, that could raise problems in that some things required by next snapshots might not be present.
As for the add-on being the culprit: could be. One thing to try, though: what if Roger runs his portable copy with all add-ons disabled? If that improves performance, then it could be an add-on; if not, we should try something else.
Implicating file systems: Roger did say this is an internal drive, hence I put more weight on possible fragmentation and data movement issues.
Cheers,
Joseph


Re: Portable version degrading

 

Hi,
Fragmentation will happen as long as new information is written in places that'll cause problems for fast reading later. Also, while something is running, the operating system will still need to access things on disk if asked by the program.
As for swapping configurations: in theory, yes as long as the versions are compatible enough to not cause visible side effects. For example, if one swaps configurations between stable and next branches, that could raise problems in that some things required by next snapshots might not be present.
As for the add-on being the culprit: could be. One thing to try, though: what if Roger runs his portable copy with all add-ons disabled? If that improves performance, then it could be an add-on; if not, we should try something else.
Implicating file systems: Roger did say this is an internal drive, hence I put more weight on possible fragmentation and data movement issues.
Cheers,
Joseph

-----Original Message-----
From: nvda@nvda.groups.io [mailto:nvda@nvda.groups.io] On Behalf Of Didier Colle
Sent: Friday, January 19, 2018 3:40 PM
To: nvda@nvda.groups.io
Subject: Re: [nvda] Portable version degrading

Dear Joseph, roger, all,


@Joseph: not sure to understand what point you try to make. Is your suggestion there is indeed a filesystem problem as the root cause?


trying to recapitulate a few things:

* "it can make it appear that the add on is defective or has a bug while
it really doesn't."

@Roger: for any further meaningful diagnosis, I believe a more concrete
symptom description is needed. (How does such a "would be" bug manifest
itself? Is it always the same "would be" bug, or do many "would be" bugs
appear randomly? When do such "would be" bugs appear: during loading, or
during execution of the add-on?)

* "there's no file system errors"

I guess that means there are no issues with the
physical/electronic/magnetic integrity of the storage medium itself (or
that the filesystem has set bad blocks aside so that they are not used
anymore). If corrupted/broken blocks on the storage medium were the root
cause, something should be found in the logs, as loading the relevant
Python modules should throw an exception (and if these exceptions are not
logged, it should be possible to make them so). Therefore, I dismiss
storage medium/filesystem corruption as the root cause of the above
mentioned "would be" bugs (assuming "bugs" is to be interpreted as
broken functionality).


* "I also notice a few functions of nvda either don't work at all or
nvda gets very sluggish in responsiveness"
@Roger: again, for any meaningful diagnosis, please provide a more
concrete symptom description. Exactly which functions are you speaking
about? What does "not work at all" mean exactly: sluggishness with
extremely long or infinite response times? Or do you get errors? Or ...?
Is the sluggishness general, or does it happen only in those specific
functions? What do you mean by sluggishness: a response in only a second?
A few seconds? A minute or more? When does sluggishness happen: at the
time of loading add-ons/modules, or continuously, or ...?
* "... nor any fragmenting." versus Joseph's statement: "In case of Roger's
issue: a possible contributing factor is constant add-on updates. He
uses an add-on that is updated on a regular basis, .. ..., potentially
fragmenting bits of files ..."
The two statements appear contradictory to me. Fragmentation may be a
root cause of sluggishness, but only when access to the storage medium is
needed, and not during general execution, which typically takes place
from RAM rather than from disk. Therefore, fragmentation issues appear
very unlikely to me.

* "while the installed version is always stable as a rock." and "I use
the portable copy to test a couple add ons"
@Roger: how much do you use one and the other? How much usage does it
take before the portable copy gets degraded?
The two statements suggest there is a problem with the portable copies.
However, nobody else seems to be experiencing the same problem. Thus, I
would translate this into the following question for you to
test/investigate further: is there a conflict between the portable
copies and your specific system setup, or is the issue caused by the
add-ons under test?
To test the former possibility, why not use a fresh portable copy
replicating the setup of your installed version, instead of that
installed version, for a while?
To test the latter would probably require moving the add-on testing to
the installed version: I guess you are using the portable version for
this purpose exactly to avoid messing up the installed version. Would
you have the possibility to do the testing in, for example, a virtual
machine, so that you can test on an installed rather than a portable
copy while not messing up your main system?
Joseph, anyone else: is there a (possibly more cumbersome) way to
perform testing on an installed version while keeping, at all times, the
possibility to revert to a stable/clean situation? (E.g., a .bat script
that swaps configuration files and add-on directories between stable and
testing versions and that can easily be executed between exiting NVDA
and restarting it?)
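As a rough sketch of what such a swap script could look like (written here in Python rather than as a .bat file; "userConfig" and "addons" are hypothetical directory names, and NVDA must not be running while it executes):

```python
import os

# Hypothetical layout: each live directory sits next to a ".stable"
# snapshot of the known-good configuration.
PAIRS = [
    ("userConfig", "userConfig.stable"),
    ("addons", "addons.stable"),
]

def swap_profiles(root):
    """Exchange each live directory with its stable counterpart using a
    three-step rename. Run only between exiting and restarting NVDA."""
    for live, stable in PAIRS:
        live_path = os.path.join(root, live)
        stable_path = os.path.join(root, stable)
        tmp_path = live_path + ".swap"
        os.rename(live_path, tmp_path)      # live    -> temporary
        os.rename(stable_path, live_path)   # stable  -> live
        os.rename(tmp_path, stable_path)    # old live -> stable slot
```

Running it once activates the stable profile; running it again restores the testing profile, so it can be bound to a shortcut used before each restart.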
If none of the above options is tried, my suggestion would then be to
regularly take snapshot copies of your portable copy, so that when
degradation takes place a diff between the stable and degraded versions
can be taken and investigated.
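A minimal sketch of taking such a diff with Python's standard filecmp module (the two directory paths are whatever snapshot locations you chose):

```python
import filecmp
import os

def report_differences(stable_dir, degraded_dir):
    """Recursively compare two portable-copy trees and return the
    relative paths that differ, exist in only one tree, or could not
    be compared."""
    diffs = []

    def walk(cmp, prefix=""):
        # Files whose contents differ, or that could not be read.
        for name in cmp.diff_files + cmp.funny_files:
            diffs.append(os.path.join(prefix, name))
        # Files or directories present on only one side.
        for name in cmp.left_only + cmp.right_only:
            diffs.append(os.path.join(prefix, name))
        # Recurse into common subdirectories.
        for name, sub in cmp.subdirs.items():
            walk(sub, os.path.join(prefix, name))

    walk(filecmp.dircmp(stable_dir, degraded_dir))
    return sorted(diffs)
```

Whatever this lists after degradation sets in is a good starting point for working out what actually changed in the portable copy.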

In summary, I believe:
1) a much more concrete/detailed symptom description is needed before
any meaningful diagnosis is possible;
2) with the info I have, filesystem/storage medium problems/corruption
are very unlikely;
3) further testing/investigation is needed in order to support or
dismiss certain hypotheses.

Kind regards,

Didier

On 19/01/2018 18:19, Joseph Lee wrote:
Hi,
It'll depend on what type of drive it is. If it's a traditional hard drive,
it'll degrade as data moves around, creating the need for defragmentation.
This is especially the case when data is repeatedly written and the file
system is asked to find new locations to hold the constantly changing data.
In case of solid-state drives, it'll degrade if the same region is written
repeatedly, as flash memory has limited endurance when it comes to data
reads and writes.
In case of Roger's issue: a possible contributing factor is constant add-on
updates. He uses an add-on that is updated on a regular basis, putting
strain on the part of the drive where the add-on bits are stored. Thus, some
drive sectors are repeatedly bombarded with new information, and one thing
operating systems will do in this case is move the new data somewhere else
on the drive, potentially fragmenting bits of files (I'll explain in a
moment). Thus one solution is to not test all add-on updates, but that's a
bit risky, as Roger is one of the key testers for the add-on I'm talking
about.
Regarding fragmentation and whatnot: the following is a bit geeky, but I
believe you should know how some parts of a file system (and, by
extension, an operating system) work, because I believe it'll help folks
better understand what might be going on:
Storage devices encountered in the wild are typically organized into many
parts, typically blocks of fixed-length units called "sectors". A
sector is the smallest unit of information that the storage device can
present to the outside world.
For example, suppose you store a small document on a hard disk drive (HDD)
and later wish to open it in Notepad. Windows will ask a module that's in
charge of organizing and interpreting data on a drive (called a file system)
to locate the sectors where the document (or the magnets or flash cells that
constitute the document data) is stored and bring it out to you. All you
see is the path to the document, but the file system will ask the drive
controller (a small computer inside hard disks and other storage devices) to
fetch data from a particular sector or region. Depending on the kind of
storage medium you're dealing with, reading from disk may involve waiting
for the platter with the desired sector to come to the attention of a
read/write head (a thin magnetic sensor used to detect or make changes to
magnetic fields), or peering inside cells and extracting the electrons
trapped within. That last sentence describes how hard disks and
solid-state drives, respectively, really work behind the scenes.
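To make the sector idea concrete, here is a small sketch that reads a file in fixed, sector-sized chunks, mirroring the fixed-size units a drive actually returns (512 bytes is assumed here; many modern drives use 4096):

```python
SECTOR_SIZE = 512  # common logical sector size; an assumption for this sketch

def read_in_sectors(path):
    """Yield a file's contents one sector-sized chunk at a time. The last
    chunk may be shorter, just as a file rarely ends exactly on a sector
    boundary on disk."""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(SECTOR_SIZE)
            if not chunk:
                break
            yield chunk
```

A 1300-byte document, for instance, occupies three sectors: two full ones and a partial one.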
But storage devices are not just meant for reading things for your
enjoyment. Without a means of storing new things, they become useless.
Depending on the medium you've got, when you save something to a storage
device, the file system in charge of the device will ask the drive
controller either to find a spot on a disk filled with magnets and change
some magnets, or to erase an entire block of flash cells and rewrite the
block with the modified data (including the old bits). You can imagine how
tedious this can get, but as far as your work is concerned, it is safe and
sound.
Now imagine you wish to read and write repeatedly on a storage device. The
file system will repeatedly ask the drive hardware to fetch data from
specific regions, and will look for new locations to store changes. On a
hard drive, because there is a limited number of heads and it takes a while
for the desired magnetic region to come to the attention of one, read speed
is slow, hence increased latency (latency refers to how long you have to
wait for something to happen). When it comes to saving things to HDDs, all
the drive needs to do is tell the read/write head to change some magnets
wherever it wishes, hence overwriting data in place is possible and easy.
But operating systems (rather, file systems) are smarter than that, as
we'll see below.
In the case of solid-state drives, reading data is as simple as looking up
the address (or sector) where the electrons comprising the data you want
are stored (akin to walking down a street grid), so there is no need to
wait for a sensor to come into position. This is why solid-state drives
appear to respond fast when reading. On the other hand, writing (injecting
electrons) is comparatively slow, because the drive needs to erase an
entire block before writing new data. In other words, just changing a
letter in a document and saving it to an SSD involves a lot of work, hence
SSDs are slower at writing than at reading; but thanks to the underlying
technology, an SSD is still way faster than a hard disk.
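The gap between sequential and scattered access can even be glimpsed from user space with a rough timing sketch (results vary enormously with OS caching and drive type, so treat this as an illustration, not a benchmark):

```python
import os
import random
import time

def time_reads(path, n=100, chunk=4096, scattered=False):
    """Time n chunk-sized reads from a file, either at consecutive
    offsets or at random ones. On a spinning hard disk the scattered
    pattern forces head movement and is typically far slower."""
    size = os.path.getsize(path)
    offsets = list(range(0, min(n * chunk, size - chunk), chunk))
    if scattered:
        offsets = [random.randrange(0, size - chunk) for _ in range(n)]
    start = time.perf_counter()
    with open(path, "rb") as f:
        for off in offsets:
            f.seek(off)
            f.read(chunk)
    return time.perf_counter() - start
```

Comparing the two timings on a large, uncached file gives a feel for the latency the fragmentation discussion above is about.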
As hinted above, file systems are smarter than drive controllers to some
extent. If data is written to a drive, the drive controller will process
whatever comes along its path. But file systems won't let drive
controllers get away with that: file systems such as NTFS (New Technology
File System) will schedule data writes so they'll have minimal impact on
the lifespan of a storage device. For hard disks, the file system will try
its best to tell the drive to store file data in consecutive locations in
one big batch, but that doesn't always work. For SSDs, the file system
will ask the drive to store new information in different cells so that all
regions are used equally (at least for storing new information; this is
called wear leveling). One way to speed things up is to ask the drive to
reorganize data so file fragments can be found in consecutive sectors, or
to trim deleted regions so fresh information can be written to more blocks
(for HDDs and SSDs, respectively); this operation is itself tedious and
can produce bad results if not done correctly and carefully.

I do understand the above explanation is a bit geeky, but I believe you need
to know some things about how things work. It is also a personal exercise to
refresh my memory on certain computer science topics (I majored in it not
long ago, and my interests were mostly hardware and operating systems, hence
I was sort of naturally drawn to screen reader internals and how it
interacts with system software).
Cheers,
Joseph

-----Original Message-----
From: nvda@nvda.groups.io [mailto:nvda@nvda.groups.io] On Behalf Of Roger
Stewart
Sent: Friday, January 19, 2018 7:58 AM
To: nvda@nvda.groups.io
Subject: Re: [nvda] Portable version degrading

The problem with this discussion is my portable version is on an internal
hard drive. So why is this degrading?

Nothing else on this drive has any trouble and I've checked, and there's no
file system errors nor any fragmenting.


Roger












On 1/19/2018 8:28 AM, Antony Stone wrote:
USB drives do need to be unmounted before removing them, otherwise there
is the risk of file system corruption. Precisely the same is true for
external hard drives, floppy disks, or any other writeable medium you can
temporarily attach to a computer.

I've never seen a USB thumb drive fall apart, and I think they're
considerably more robust than floppy disks, which is basically what they
replaced. You can also drop them on the floor with a good deal more
confidence of them working afterwards than if you drop an external hard
disk.

Yes, they're vulnerable to static electricity; that's why most of them
have plastic caps to put over the contacts or a slider to retract the
contacts into the body.

My experience is that if they're treated reasonably they work very well.
If they're mistreated they'll give as many problems as any other
mistreated storage medium.


Antony.

On Friday 19 January 2018 at 15:17:36, tonea.ctr.morrow@faa.gov wrote:

A few years back, I had a job for three years where people brought me
their files on USB thumb drives. These things are horrible in terms of
long life. They really do have to be unmounted prior to removal from the
computer or they get corrupted. They physically fall apart easily. And
the hardware inside seems to be more vulnerable to static-electricity
data loss than other portable drives, certainly more vulnerable than most
computers.



I would think that would be the problem.



Tonea









Re: vocalizer voices and nvda

Rui Fontes
 

Ciao!

If you are referring to Tiflotecnia's voices, the download page is:
https://vocalizer-nvda.com/downloads

Rui Fontes
Tiflotecnia, Lda.


Às 06:27 de 19/01/2018, Angela Delicata escreveu:

Hi,
How can I get Italian Alice for NVDA?
You can also contact me at: angela.delicata1@gmail.com.
Thank you.
Angela from Italy



Re: Portable version degrading

ely.r@...
 

Well,
at least "Lecturer" Joseph is appreciated for his clarity.
Rick

Dr. Rick Ely
TVI, Vision Consultant
451 Rocky Hill Road
Florence, MA 01062
(413) 727-3038

-----Original Message-----
From: nvda@nvda.groups.io [mailto:nvda@nvda.groups.io] On Behalf Of Joseph
Lee
Sent: Friday, January 19, 2018 3:54 PM
To: nvda@nvda.groups.io
Subject: Re: [nvda] Portable version degrading

Hi,
Defragmentation won't extend drive life. Instead, it makes it a bit faster
for drives to return the data you need. For hard drives, once the required
region gets the attention of a read/write head, the head will read
continuously unless told to stop. Ideally, the head can (and should) do
this in one sitting, but sometimes it'll go "mad" trying to locate
fragmented information all over the drive, particularly if you've edited
files and the drive decided to store the new bits somewhere else.
As for automatic defragmentation, it is controlled by a setting in Optimize
Drives.
As for accessing related data: it depends. If the data you need (or a
group of it) is located together, then yes; otherwise the drive will be
searched to locate the fragments.
As for calling me a "professor": I don't deserve this title (I don't have a
doctorate in NVDA, let alone computer science or communication studies).
Nor is it the first time someone has given me this nickname: many years
ago, I was called "professor" because I seemed to know everything about the
HumanWare BrailleNote, with some BrailleNote users assuming I was HumanWare
staff when I was just a high school senior. In terms of NVDA, let's just
say this is small talk from a certified NVDA Expert; perhaps I should
record a tutorial set devoted to NVDA internals.
Cheers,
Joseph


-----Original Message-----
From: nvda@nvda.groups.io [mailto:nvda@nvda.groups.io] On Behalf Of
ely.r@comcast.net
Sent: Friday, January 19, 2018 12:34 PM
To: nvda@nvda.groups.io
Subject: Re: [nvda] Portable version degrading

Professor Joseph,
So, first, I will be reading this lecture several times going forward.
However, you do a wonderful job of making geeky things reasonably
comprehensible. I think part of it, for me, is your slight leaning towards
anthropomorphizing file systems, drives, and even those little electrons.

One question has to do with defragmenting. Does the process speed up
access to related pieces of data? Second, does the process extend drive
life to any extent? Last, I promise: I thought I read at some point that
systems had made user-invoked defragmentation unnecessary. Be that true?
Rick, the old English teacher who loves inanimate objects that come to life

Dr. Rick Ely
TVI, Vision Consultant
451 Rocky Hill Road
Florence, MA 01062
(413) 727-3038

-----Original Message-----
From: nvda@nvda.groups.io [mailto:nvda@nvda.groups.io] On Behalf Of Joseph
Lee
Sent: Friday, January 19, 2018 12:19 PM
To: nvda@nvda.groups.io
Subject: Re: [nvda] Portable version degrading

Hi,
It'll depend on what type of drive it is. If it's a traditional hard drive,
it'll degrade as data moves around, creating the need for defragmentation.
This is especially the case when data is repeatedly written and the file
system is asked to find new locations to hold the constantly changing data.
In case of solid-state drives, it'll degrade if the same region is written
repeatedly, as flash memory has limited endurance when it comes to data
reads and writes.
In case of Roger's issue: a possible contributing factor is constant add-on
updates. He uses an add-on that is updated on a regular basis, putting
strain on part of the drive where the add-on bits are stored. Thus, some
drive sectors are repeatedly bombarded with new information, and one way
operating systems will do in this case is move the new data somewhere else
on the drive, potentially fragmenting bits of files (I'll explain in a
moment). Thus one solution is to not test all add-on updates, but that's a
bit risky as Roger is one of the key testers for this add-on I'm talking
about.
Regarding fragmentation and related matters: the following is a bit geeky,
but I think it's worth knowing how some parts of a file system (and, by
extension, an operating system) work, because it'll help folks better
understand what might be going on:
Storage devices encountered in the wild are typically organized into blocks
of fixed-length units called "sectors". A sector is the smallest unit of
information that the storage device can present to the outside world; it is
the granularity at which data is read from or written to the device.
For example, when you store a small document on a hard disk drive (HDD) and
later wish to open it in Notepad, Windows will ask a module that's in
charge of organizing and interpreting data on a drive (called a file system)
to locate the sector where the document (or the magnets or flash cells that
constitute the document data) is stored and bring it out to you. To you, all
you see is the path to the document, but the file system will ask the drive
controller (a small computer inside hard disks and other storage devices) to
fetch data from a particular sector or region. Depending on what kind of
storage medium you're dealing with, reading from disks may involve waiting
for a platter with the desired sector to come to the attention of a read/write
head (a thin magnetic sensor used to detect or make changes to magnetic
fields) or peering inside windows and extracting electrons trapped within.
This last sentence is a vivid description of how hard disks and solid-state
drives really work behind the scenes, respectively.
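The sector arithmetic above can be pictured with a few lines of Python. This is only a toy sketch, not how any particular file system is implemented; the 512-byte sector size and the `locate` helper are assumptions for illustration:

```python
# Toy illustration: mapping a byte offset within a file's data region
# to a sector number and an offset inside that sector.
SECTOR_SIZE = 512  # bytes; a common (but not universal) sector size

def locate(byte_offset: int) -> tuple[int, int]:
    """Return (sector_number, offset_within_sector) for a byte offset."""
    return divmod(byte_offset, SECTOR_SIZE)

sector, within = locate(1300)
# byte 1300 falls in sector 2 (sectors 0 and 1 hold bytes 0-1023)
```

Real drives add partition offsets and file-system metadata on top of this, but the divide-and-take-the-remainder idea is the same.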
But storage devices are not just meant for reading things for your
enjoyment; without a means of storing new things, a drive would be useless.
Depending on the medium you've got, when you save something to a storage
device, the file system in charge of the device will ask the drive
controller either to find a spot on a disk filled with magnets and change
some of them, or to apply a high voltage that drains the trapped electrons
from every cell in a block, erasing the block so it can be refilled with
the modified data (including the old, unchanged bits). You can imagine how
tedious this can get, but as far as your work is concerned, it is safe and
sound.
Now imagine you wish to read and write repeatedly on a storage device. The
file system will repeatedly ask the drive hardware to fetch data from
specific regions, and will look for new locations to store changes. On a
hard drive, because there is a limited number of heads and it takes a while
for the desired magnetic region to come to the attention of one, read speed
is slow, hence increased latency (latency refers to how long you have to
wait for something to happen). When it comes to saving things to HDDs, all
the drive needs to do is tell the read/write head to change some magnets
wherever it wishes, so overwriting data in place is possible and easy. But
operating systems (rather, file systems) are smarter than that, as we'll
see below.
In the case of solid-state drives, reading data is as simple as looking up
the address (or sector) where the electrons comprising the data you want
are stored (akin to walking down a street grid), so there is no need to
wait for a mechanical part to move into position. This is why solid-state
drives respond so quickly when reading. On the other hand, writing (that
is, injecting electrons) is comparatively slow because the drive needs to
erase an entire block before writing new data. In other words, just
changing a letter in a document and saving it to an SSD involves a lot of
work, hence SSDs are slower at writing than at reading, though thanks to
the underlying technology they are still far faster than hard disks.
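The erase-before-write behavior can be sketched as a toy model in Python. The sizes and the `rewrite_byte` helper below are invented for illustration; real SSD controllers hide this cost behind caching and block remapping:

```python
# Toy model of why small SSD writes are costly: changing one byte still
# requires reading the whole erase block, erasing it, and rewriting
# everything (read-modify-write). Sizes are deliberately tiny.
PAGE_SIZE = 4          # bytes per "page" (illustrative only)
PAGES_PER_BLOCK = 4    # pages in one erase block

def rewrite_byte(block: list[bytearray], page: int, offset: int, value: int) -> int:
    """Change one byte in a block; return how many bytes had to be rewritten."""
    saved = [bytes(p) for p in block]       # read the whole block out
    for p in block:                         # erase: flash clears whole blocks only
        p[:] = b"\xff" * PAGE_SIZE
    for p, old in zip(block, saved):        # write everything back...
        p[:] = old
    block[page][offset] = value             # ...including the one changed byte
    return PAGE_SIZE * PAGES_PER_BLOCK

block = [bytearray(b"\x00" * PAGE_SIZE) for _ in range(PAGES_PER_BLOCK)]
cost = rewrite_byte(block, page=1, offset=2, value=7)
# one changed byte cost a 16-byte rewrite in this toy model
```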
As hinted above, file systems are smarter than drive controllers to some
extent. If data is written to a drive, the drive controller will process
whatever comes along its path. But file systems won't let drive
controllers get away with that: file systems such as NTFS (New Technology
File System) will schedule data writes to have minimal impact on the
lifespan of a storage device. For hard disks, the file system will try its
best to tell the drive to store file data in consecutive locations in one
big batch, but that doesn't always work. For SSDs, the file system will ask
the drive to store new information in different cells so all regions are
used equally (at least for storing new information; this is called wear
leveling). One way to speed things up is to ask the drive to reorganize
data so file fragments can be found in consecutive sectors (for HDDs), or
to trim deleted regions so fresh information can be written to more blocks
(for SSDs); this operation is itself tedious and produces bad results if
not done correctly and carefully.
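Wear leveling, as described above, can be sketched as a toy policy: always direct the next write at the least-worn block instead of hammering the same one. The `wear_level_writes` helper is a hypothetical illustration, not a real controller algorithm:

```python
# Toy sketch of wear leveling: spread writes evenly across erase blocks
# by always picking the block with the fewest writes so far.
from collections import Counter

def wear_level_writes(num_blocks: int, num_writes: int) -> Counter:
    wear = Counter({b: 0 for b in range(num_blocks)})
    for _ in range(num_writes):
        target = min(wear, key=wear.get)  # least-worn block
        wear[target] += 1
    return wear

wear = wear_level_writes(num_blocks=4, num_writes=100)
# the 100 writes end up spread evenly (25 per block) instead of
# one block absorbing all 100 and wearing out first
```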

I do understand the above explanation is a bit geeky, but I believe you need
to know some things about how things work. It is also a personal exercise to
refresh my memory on certain computer science topics (I majored in it not
long ago, and my interests were mostly hardware and operating systems, hence
I was sort of naturally drawn to screen reader internals and how screen
readers interact with system software).
Cheers,
Joseph

-----Original Message-----
From: nvda@nvda.groups.io [mailto:nvda@nvda.groups.io] On Behalf Of Roger
Stewart
Sent: Friday, January 19, 2018 7:58 AM
To: nvda@nvda.groups.io
Subject: Re: [nvda] Portable version degrading

The problem with this discussion is my portable version is on an internal
hard drive. So why is this degrading?

Nothing else on this drive has any trouble, and I've checked: there are no
file system errors nor any fragmenting.


Roger












On 1/19/2018 8:28 AM, Antony Stone wrote:
USB drives do need to be unmounted before removing them, otherwise there is
the risk of file system corruption. Precisely the same is true for external
hard drives, floppy disks, or any other writeable medium you can
temporarily attach to a computer.

I've never seen a USB thumb drive fall apart, and I think they're
considerably more robust than floppy disks, which is basically what they
replaced. You can also drop them on the floor with a good deal more
confidence of them working afterwards than if you drop an external hard
disk.

Yes, they're vulnerable to static electricity; that's why most of them have
plastic caps to put over the contacts or a slider to retract the contacts
into the body.

My experience is that if they're treated reasonably they work very well.
If they're mistreated they'll give as many problems as any other
mistreated storage medium.


Antony.

On Friday 19 January 2018 at 15:17:36, tonea.ctr.morrow@faa.gov wrote:

A few years back, I had a job for three years where people brought me their
files on USB thumb drives. These things are horrible in terms of long life.
They really do have to be unmounted prior to removing them from the
computer, or they get corrupted. They physically fall apart easily. And the
hardware inside seems to be more vulnerable to static-electricity data loss
than other portable drives, certainly more vulnerable than most computers.



I would think that would be the problem.



Tonea



-----Original Message-----

I've noticed over the past couple years that my portable install of NVDA
will sometimes degrade or get a bit corrupted over time all by itself,
while the installed version is always stable as a rock. Does anyone know
why this is, and is there any way to prevent it from happening? I use the
portable copy to test a couple of add-ons, and if the portable version
corrupts, it can make it appear that the add-on is defective or has a bug
when it really doesn't. Deleting the portable copy and making a new one
will clear it up. I also notice a few functions of NVDA either don't work
at all or NVDA gets very sluggish in responsiveness, and this all gets back
to normal after a complete flush and remake of the portable version. As I
say, this has never happened at all with my installed copy on the same
computer.





Roger


Re: vocalizer voices and nvda

John Isige
 

On 1/19/2018 0:27, Angela Delicata wrote:
Hi,

How can I get the Italian Alice voice for NVDA?

You can also contact me at: angela.delicata1@gmail.com.

Thank you.

Angela from Italy




Re: Portable version degrading

 

Hi,
Defragmentation won't extend drive life. Instead, it makes it a bit faster
for drives to return data you need. For hard drives, once the required
region gets the attention of a read/write head, the head will read
continuously unless told to stop. Ideally, the head can (and should) do this
in one sitting, but sometimes it'll go "mad" if trying to locate fragmented
information all over the drive, particularly if you've edited files and the
drive decided to store the new bits somewhere else.
As for automatic defragmentation, it is controlled by a setting in Optimize
Drives.
As for accessing related data: it depends. If the pieces of data you need
are located next to each other, then yes; otherwise the drive will be
searched to locate the fragments.
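The cost of chasing fragments can be pictured with a toy model in Python, where every jump to a non-adjacent sector counts as one seek. The `count_seeks` helper and the sector numbers are invented for illustration:

```python
# Toy model: count head "seeks" needed to read a file, where a seek is
# any jump to a sector that isn't immediately after the previous one.
def count_seeks(sectors: list[int]) -> int:
    seeks = 1  # initial positioning over the first sector
    for prev, cur in zip(sectors, sectors[1:]):
        if cur != prev + 1:
            seeks += 1
    return seeks

contiguous = list(range(100, 108))         # defragmented file: one seek
fragmented = [100, 101, 512, 513, 90, 91]  # three separate fragments
```

In this model the contiguous file needs a single seek, while the fragmented one forces the head to reposition once per fragment.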
As for calling me a "professor": I don't deserve the title (I don't have a
doctorate in NVDA, let alone computer science or communication studies).
Nor is it the first time someone has given me this nickname: many years
ago, I was called "professor" because I seemed to know everything about the
HumanWare BrailleNote, with some BrailleNote users assuming I was a
HumanWare staff member when I was just a high school senior. In terms of
NVDA, let's just say this is small talk from a certified NVDA Expert;
perhaps I should record a tutorial set devoted to NVDA internals.
Cheers,
Joseph



Re: Portable version degrading

ely.r@...
 

Professor Joseph,
So, first, I will be reading this lecture several times going forward.
However, you do a wonderful job of making geeky things reasonably
comprehensible. I think part of it, for me, is your slight leaning towards
anthropomorphizing file systems, drives, and even those little electrons.

One question has to do with defragmenting. Does the process help speed up
access to related pieces of data? Second, does the process extend drive life
to any extent? Last, I promise: I thought I read at some point that systems
had made user-invoked defragmentation unnecessary. Is that true?
Rick the old English teacher who loves Inanimate objects that come to life

Dr. Rick Ely
TVI, Vision Consultant
451 Rocky Hill Road
Florence, MA 01062
(413) 727-3038



Re: Rumola not compatible with my version of Firefox

Iván Novegil
 

I think Firefox redesigned its extension system in Firefox 57, which is why Rumola is no longer compatible.
Install Firefox ESR (which has the features of 52.x plus the security fixes of the latest release) or downgrade to Firefox 56 to get Rumola working.

Regards.
Iván Novegil Cancelas
Editor
ivan.novegil@...



Comunidad hispanohablante de NVDA | Proyecto NVDA.es
www.NVDA.es
@nvda_es

Usuario do NVDA en galego

***Note que a anterior mensaxe e/ou os seus adxuntos pode conter información privada ou confidencial. O emprego da información desta mensaxe e/ou dos seus adxuntos está reservado ao ámbito privado do destinatario agás autorización do remitente. Se recibiu esta mensaxe por erro comuníqueo por esta mesma vía e destrúa a mensaxe.***

El 19 ene 2018, a las 20:49, Nevzat Adil <nevzatadil@...> escribió:

Hello Group,
I tried to get Rumola today but it says it's not compatible with my
version of Firefox which is 57.0.4.
Does anyone know what version is compatible?
Nevzat






[nvda] CHM, a.k.a. Microsoft Help files

Chris Mullins
 

Hi Tonea

I’ve just re-sent this as the original had incorrect information regarding moving the mouse pointer and activating buttons using NVDA. Note also that the alt+o and alt+s commands referenced are Windows commands, not NVDA commands.

 

I found a CHM file on my machine and opened it. On entry, I was placed in the Contents list and used the arrow keys to traverse it. When I found a topic I wanted help on, I pressed F6. This moved me to the help topic details. I used the arrow keys to move around the help topic window. From here I switched into screen review mode and located the following buttons:

Hide, Print, Options, Search, Contents, Favorites

 

I could access these buttons via the review cursor, use NVDA+numpad slash to move the mouse pointer to the review cursor position, then activate a button using numpad slash to simulate a mouse click. I also found that alt+o opens a context menu containing the following:

Hide Tabs

Back

Forward

Home

Stop

Refresh

Internet Options...

Print...   

 

Some of these are probably equivalent to the header items you could not access.  Alt+s also opened the search function, which was accessible as well.

 

HTH

Chris

From: nvda@nvda.groups.io [mailto:nvda@nvda.groups.io] On Behalf Of tonea.ctr.morrow@...
Sent: 17 January 2018 15:05
To: nvda@nvda.groups.io
Subject: [nvda] CHM, a.k.a. Microsoft Help files

 

A while back, I asked for help with making my documentation equally accessible to NVDA users. James Austin and Cearbhall O’Meadhra both answered my call and have been very patient in helping me.

 

Microsoft help files end in a CHM extension. Just a refresher: this format generates a window with the following buttons along the top: Hide, Back, Forward, Home, Print, and Options. Below the button bar, there is a left-hand side that contains three tabs: Contents, Search, and Favorites. In the Contents tab, there is an organizational tree that allows you to look at the page names and navigate directly to the page you want. On the right-hand side, there is the content window. It displays the header of the page and any navigation buttons relating to the page, such as Previous, Top, and Next. Below the header is the content for the page.

 

On the first run through, the content window was blank. I contacted my software’s maker and they had me make some changes. I’m not using Word, but just as Word can save as a single document or PDF or web page, this software can save as an entire web site, a CHM help file, or a PDF. I had applied something they call a skin to give the entire set of pages the same look and feel as the software it is supporting. These guys told me to remove the skin. According to them, all that would remain is Microsoft’s CHM frame and my pages within it.

 

On the second run through, the content window was there, but the content header (with the buttons that appear next to it) was not visible to NVDA. This told me that most of the problem was in the skin. I contacted the software maker since the skin was from one of their templates.

 

I’m quoting their response: The most likely cause there is a non-scrolling header, where the topic header stays in the same place and the topic content scrolls in a box below it. If the screen reader is old/poorly programmed/dumb then it is quite possible that it wouldn't be able to handle that. End quote.

 

Well, on behalf of everyone here, I was offended. I may not yet have access to NVDA but I know everyone here works hard to help it be a really good product. I’ve gone back to my files and, as a sighted user, I can’t see any header that stays in the same place. I will be contacting them about that and pushing them to nail down this problem and also to help me get a skin that doesn’t make the files invisible to screen readers.

 

However, I also have to be respectful and ask if anyone knows of a CHM file that is completely readable to NVDA? Is any of this problem an issue with the NVDA software? Or, is it a known problem with CHM that users of other screen readers also experience?

 

Thank you for your understanding,

 

Tonea Morrow


Re: Portable version degrading

Brian's Mail list account <bglists@...>
 

I don't think it is, in fact. I just think it's having issues due to other problems, perhaps with the drive or some parameter of it.
Brian

bglists@blueyonder.co.uk
Sent via blueyonder.
Please address personal email to:-
briang1@blueyonder.co.uk, putting 'Brian Gaff'
in the display name field.

----- Original Message -----
From: "Roger Stewart" <paganus2@gmail.com>
To: <nvda@nvda.groups.io>
Sent: Friday, January 19, 2018 3:58 PM
Subject: Re: [nvda] Portable version degrading


The problem with this discussion is that my portable version is on an internal hard drive, so why is it degrading?

Nothing else on this drive has any trouble, and I've checked: there are no file system errors and no fragmentation.


Roger

On 1/19/2018 8:28 AM, Antony Stone wrote:
USB drives do need to be unmounted before removing them, otherwise there is
the risk of file system corruption. Precisely the same is true for external
hard drives, floppy disks, or any other writeable medium you can temporarily
attach to a computer.

I've never seen a USB thumb drive fall apart, and I think they're considerably
more robust than floppy disks, which is basically what they replaced. You can
also drop them on the floor with a good deal more confidence of them working
afterwards than if you drop an external hard disk.

Yes, they're vulnerable to static electricity; that's why most of them have
plastic caps to put over the contacts or a slider to retract the contacts into
the body.

My experience is that if they're treated reasonably they work very well. If
they're mistreated they'll give as many problems as any other mistreated
storage medium.


Antony.

On Friday 19 January 2018 at 15:17:36, tonea.ctr.morrow@faa.gov wrote:

A few years back, I had a job for three years where people brought me their
files on USB thumb drives. These things are horrible in terms of
longevity. They really do have to be unmounted prior to removal from the
computer or they get corrupted. They physically fall apart easily. And,
the hardware inside seems to be more vulnerable to static electricity data
loss than other portable drives, certainly more vulnerable than most
computers.



I would think that would be the problem.



Tonea



-----Original Message-----

I've noticed over the past couple of years that my portable install of NVDA
will sometimes degrade or get a bit corrupted over time all by itself,
while the installed version is always stable as a rock. Does anyone know
why this is, and is there any way to prevent it from happening? I use
the portable copy to test a couple of add-ons, and if the portable version
corrupts, it can make it appear that the add-on is defective or has a bug
when it really doesn't. Deleting the portable copy and making a new one
will clear it up. I also notice a few functions of NVDA either don't work
at all or NVDA gets very sluggish in responsiveness, and this all gets back
to normal after a complete flush and remake of the portable version. As I
say, this has never happened at all with my installed copy on the same
computer.





Roger