Re: Portable version degrading
Hi,
Defragmentation won't extend drive life. Instead, it helps the drive return
the data you need a bit faster. For hard drives, once the required region
gets the attention of a read/write head, the head will read continuously
unless told to stop. Ideally, the head can (and should) do this in one
sitting, but sometimes it'll go "mad" trying to locate fragmented
information scattered all over the drive, particularly if you've edited
files and the drive decided to store the new bits somewhere else.
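To make the "mad" head concrete, here's a toy Python sketch (not a real disk tool; the block numbers are made up) that compares how far a head travels reading a contiguous file versus a fragmented one:

```python
# Toy model: estimate how much farther a read/write head travels when a
# file's blocks are scattered versus stored side by side. Block numbers
# are invented purely for illustration.

def head_travel(block_positions):
    """Total distance the head moves visiting blocks in file order."""
    travel = 0
    position = 0  # assume the head starts at block 0
    for block in block_positions:
        travel += abs(block - position)
        position = block
    return travel

contiguous = list(range(100, 110))  # ten blocks side by side
fragmented = [100, 5000, 101, 9000, 102, 250, 103, 7777, 104, 42]

print(head_travel(contiguous))   # short trip: 109
print(head_travel(fragmented))   # far longer for the same ten blocks
```

Same ten blocks of data in both cases; only the layout differs, and the travel cost explodes when the layout is scattered.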
As for automatic defragmentation: it is controlled by a setting in Optimize
Drives, the built-in Windows tool.
As for accessing related data: it depends. If the data you need (or a group
of related pieces) is located next to each other, then yes; otherwise the
drive will be searched to locate the fragments.
As for calling me a "professor": I don't deserve the title (I don't have a
doctorate in NVDA, let alone computer science or communication studies). Nor
is it the first time someone has given me this nickname: many years ago, I
was called "professor" because I seemed to know everything about the
HumanWare BrailleNote, with some BrailleNote users commenting that I must be
a HumanWare staff member when I was just a high school senior. In terms of
NVDA, let's just say this is small talk from a certified NVDA Expert;
perhaps I should record a tutorial set devoted to NVDA internals.
From: firstname.lastname@example.org [mailto:email@example.com] On Behalf Of
Sent: Friday, January 19, 2018 12:34 PM
Subject: Re: [nvda] Portable version degrading
So, first, I will be reading this lecture several times going forward.
However, you do a wonderful job of making geeky things reasonably
comprehensible. I think part of it, for me, is your slight leaning towards
anthropomorphizing file systems, drives and even those little electrons.
One question has to do with defragmenting. Does the process help speed up
access to related pieces of data? Second, does the process extend drive
life to any extent? Last, I promise: I thought I read at some point that
systems had made user-invoked defragmentation unnecessary. Be that as it may.
Rick, the old English teacher who loves inanimate objects that come to life
Dr. Rick Ely
TVI, Vision Consultant
451 Rocky Hill Road
Florence, MA 01062
From: firstname.lastname@example.org [mailto:email@example.com] On Behalf Of Joseph
Sent: Friday, January 19, 2018 12:19 PM
Subject: Re: [nvda] Portable version degrading
It'll depend on what type of drive it is. If it's a traditional hard drive,
it'll degrade as data moves around, creating the need for defragmentation.
This is especially the case when data is repeatedly written and the file
system is asked to find new locations to hold the constantly changing data.
In the case of solid-state drives, the drive will degrade if the same region
is written repeatedly, as flash memory has limited endurance when it comes
to writing and erasing data.
In the case of Roger's issue: a possible contributing factor is constant
add-on updates. He uses an add-on that is updated on a regular basis,
putting strain on the part of the drive where the add-on bits are stored.
Thus, some drive sectors are repeatedly bombarded with new information, and
one thing operating systems will do in this case is move the new data
somewhere else on the drive, potentially fragmenting bits of files (I'll
explain in a moment). One solution is to not test all add-on updates, but
that's a bit risky as Roger is one of the key testers for the add-on I'm
talking about.
Regarding fragmentation and whatnot: the following is a bit geeky, but I
believe you should know how some parts of a file system (and, by extension,
operating systems) work, because I believe it'll help folks better
understand what might be going on:
Storage devices encountered in the wild are typically organized into many
parts, usually blocks of fixed-length units called "sectors". A sector is
the smallest unit of information that the storage device can present to the
outside world; that is, the smallest chunk of data it will read or write in
one go.
For example, suppose you store a small document on a hard disk drive (HDD)
and later wish to open it in Notepad: Windows will ask a module that's in
charge of organizing and interpreting data on a drive (called a file system)
to locate the sector where the document (or magnets or flash cells that
constitute the document data) is stored and bring it out to you. To you, all
you see is the path to the document, but the file system will ask the drive
controller (a small computer inside hard disks and other storage devices) to
fetch data in a particular sector or region. Depending on what kind of
storage medium you're dealing with, reading from disks may involve waiting
for a platter with the desired sector to come to the attention of a read/write
head (a thin magnetic sensor used to detect or make changes to magnetic
fields) or peering inside windows and extracting electrons trapped within.
This last sentence is a vivid description of how hard disks and solid-state
drives really work behind the scenes, respectively.
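The sector-by-sector picture above can be sketched in a few lines of Python. The 512-byte size is the traditional sector size (many modern drives use 4096 bytes), and the file name is just an example for illustration:

```python
# Sketch: read a file the way a drive hands data back, one fixed-size
# sector at a time. 512 bytes is a traditional sector size; the file
# name here is invented for the example.

SECTOR_SIZE = 512

def read_in_sectors(path):
    """Return a file's contents as a list of sector-sized chunks."""
    sectors = []
    with open(path, "rb") as f:
        while True:
            chunk = f.read(SECTOR_SIZE)  # one sector's worth of data
            if not chunk:
                break
            sectors.append(chunk)
    return sectors

# A 1300-byte file occupies three sectors (the last one only partly full).
with open("example.bin", "wb") as f:
    f.write(b"x" * 1300)
print(len(read_in_sectors("example.bin")))  # 3
```

This is why even a one-byte file still "costs" a whole sector on disk: the sector is the smallest unit the device deals in.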
But storage devices are not just meant for reading things for your
enjoyment. Without a means of storing new things, a drive becomes useless.
Depending on the medium you've got, when you save something to a storage
device, the file system in charge of the device will ask the drive
controller to either find a spot on a disk filled with magnets and change
some of them, or apply a high voltage to clear all the cells in a block,
erasing it, and then fill the empty block with the modified data (including
the old bits). You can imagine how tedious this can get, but as far as your
work is concerned, it is safe and sound.
Now imagine you wish to read and write repeatedly on a storage device. The
file system will repeatedly ask the drive hardware to fetch data from
specific regions, and will look for new locations to store changes. On a
hard drive, because there is a limited number of heads and it'll take a
while for the desired magnetic region to come to the attention of one, read
speed is slow, hence increased latency (latency refers to how long you have
to wait for something to happen). When it comes to saving things to HDD's,
all the drive needs to do is tell the read/write head to change some magnets
wherever it wishes, hence overwriting data in place is possible and easy.
But operating systems (rather, file systems) are smarter than that, as
we'll see below.
In the case of solid-state drives, reading data is as simple as looking up
the address (or sector) where the electrons comprising the data you want
are stored (akin to walking down a street grid), so there's no need to wait
for a sensor to swing into position. This is why solid-state drives appear
to respond so fast when reading something. On the other hand, writing
(injecting electrons) is comparatively slow, because the drive needs to
erase an entire block before writing new data. In other words, just
changing a letter in a document and saving it to an SSD involves a lot of
work, hence SSD's are slower at writing than at reading; but because of the
underlying technology in use, they are still way faster than hard disks.
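That erase-before-write dance can be modeled in a few lines. This is a deliberately simplified toy (real SSD controllers remap blocks rather than erasing in place, and the block size is invented), but it shows why a one-letter edit costs a whole block:

```python
# Toy model of why a one-byte change costs an SSD a whole block: flash
# can only erase a block at a time, so changing one letter means
# read block -> erase block -> write the block back with the change.
# Sizes and behavior are illustrative, not from any real drive.

BLOCK_SIZE = 4096

class ToyFlashBlock:
    def __init__(self):
        self.data = bytearray(BLOCK_SIZE)
        self.erase_count = 0

    def rewrite(self, offset, new_bytes):
        snapshot = bytes(self.data)          # read the whole block out
        self.data = bytearray(BLOCK_SIZE)    # "erase": block blanked
        self.erase_count += 1
        self.data[:] = snapshot              # write old contents back...
        self.data[offset:offset + len(new_bytes)] = new_bytes  # ...plus the change

block = ToyFlashBlock()
block.rewrite(10, b"A")   # change a single byte
block.rewrite(10, b"B")   # change it again
print(block.erase_count)  # 2 full erases for 2 one-byte edits
```

Two one-byte edits, two full block erases: that mismatch between the size of the change and the size of the erase is exactly the endurance problem described above.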
As hinted above, file systems are smarter than drive controllers to some
extent. If data is written to a drive, the drive controller will process
whatever comes along its path. But file systems won't let drive controllers
get away with that: file systems such as NTFS (New Technology File System)
will schedule data writes so they'll have minimal impact on the lifespan of
a storage device. For hard disks, the file system will try its best to tell
the drive to store file data in consecutive locations in one big batch, but
that doesn't always work. For SSD's, the file system will ask the drive to
store new information in different cells so all regions can be used equally
(at least for storing new information; this is called wear leveling). One
way to speed things up is asking the drive to reorganize data so file
fragments can be found in consecutive sectors, or to trim deleted regions
so fresh information can be written to more blocks (for HDD's and SSD's,
respectively); this operation itself is tedious and produces bad results if
not done correctly and carefully.
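The idea behind wear leveling can be sketched very simply. This is only the core concept, not how any real controller is implemented: instead of always writing to the same physical block, pick the least-worn one.

```python
# Sketch of the idea behind wear leveling: the controller directs each
# write to the block with the fewest erases so far, so no single block
# wears out ahead of the others. All numbers are made up.

def pick_block(erase_counts):
    """Return the index of the block with the fewest erases so far."""
    return min(range(len(erase_counts)), key=lambda i: erase_counts[i])

erase_counts = [0, 0, 0, 0]     # four flash blocks, all fresh
for _ in range(12):             # twelve writes...
    block = pick_block(erase_counts)
    erase_counts[block] += 1    # ...spread across all four blocks

print(erase_counts)  # [3, 3, 3, 3] -- wear spread evenly
```

Without this, twelve writes aimed at one logical location would put all twelve erases on a single block, which is precisely the "same region written repeatedly" problem from earlier in the thread.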
I do understand the above explanation is a bit geeky, but I believe you
need to know some things about how things work. It is also a personal
exercise to refresh my memory on certain computer science topics (I majored
in it not long ago, and my interests were mostly hardware and operating
systems, hence I was naturally drawn to screen reader internals and how
they interact with system software).
From: firstname.lastname@example.org [mailto:email@example.com] On Behalf Of Roger
Sent: Friday, January 19, 2018 7:58 AM
Subject: Re: [nvda] Portable version degrading
The problem with this discussion is my portable version is on an internal
hard drive. So why is this degrading?
Nothing else on this drive has any trouble and I've checked, and there's no
file system errors nor any fragmenting.
On 1/19/2018 8:28 AM, Antony Stone wrote:
USB drives do need to be unmounted before removing them, otherwise there is
the risk of file system corruption. Precisely the same is true for external
hard drives, floppy disks, or any other writeable medium you can
temporarily attach to a computer.
USB sticks are considerably more robust than floppy disks, which is
basically what they replaced. You can also drop them on the floor with a
good deal more confidence of them working afterwards than if you drop an
external hard disk.
Some have plastic caps to put over the contacts or a slider to retract
them, but if they're mistreated they'll give as many problems as any other
medium.
...their data files on USB thumb drives. These things are horrible in terms
of ... loss than other portable drives, certainly more vulnerable than most
...
...at all, or NVDA gets very sluggish in responsiveness, and this all gets
... back to normal after a complete flush and remake of the portable
version. ...this never has happened at all with my installed copy on the...