gpt4all: a new app that works great with NVDA!


Josh Kennedy
 

I found a new app that works great with NVDA! It is called GPT4All. It is a smaller, local, offline version of ChatGPT that works entirely on your own computer; once installed, no internet connection is required. I do recommend the most modern processor you can get, though even an entry-level one will do, and 8 GB of RAM or more. I'm running it on an AMD Ryzen 5 CPU from around mid-to-late 2020. Be sure to install Git for Windows first, though. The whole thing is open source, and all the directions can be found at the link below.
While it is responding to your chat query, just put NVDA into beeps speech mode. When NVDA stops beeping, set it back to talk mode and read its response with the review cursor, then mark and copy it, and so on. Or, if you are using JAWS, or if you simply prefer it, virtualize the window and copy the text that way with an NVDA add-on.


https://github.com/nomic-ai/gpt4all


Shawn
 

Offline? So how does it answer your questions if it doesn't access the Internet?

Shawn Klein



Akash Kakkar
 

Actually, it is offline because it runs on pre-trained data, which has to be downloaded to your PC.



Gene
 

The chatbot that is used online doesn't have access to the Internet. It was trained on material from the Internet, though I don't know whether it was also trained on other material. The web page about the chatbot even says something like it isn't aware of things from the last three years.

I have no idea what limits of information apply to the personal-computer version compared with the online version.

Gene



Shaun Oliver
 

Um, excuse me,

 

*snip*

The Chat Bot that is used online doesn't have access to the Internet.  It was trained on material from the Internet, though I don't know if it was also trained on other material. 
*snip*

 

How can it be online but not have access to the internet?

 

Please, if you're going to April-fool everyone:

1: Do it before midday.

2: Make it believable.

3: Make sure you've got us, right to the very end, when you scream "April fool!" at us.

 

 

 

Shaun Oliver
VK5ESE
Independent Adaptive Technology Consultant,
Recording Engineer,
Voiceover Artist,
Session Guitarist.
ABN: 22335298446

'For what it's worth: It's never too late to be whoever you want to be. I hope you live a life you're proud of, and if you find that you're not, I hope you have the strength to start over.' – F. Scott Fitzgerald

Mobile and WhatsApp: +61477836199
Twitter and Telegram: @blindman75

 


On Sat, Apr 1, 2023 at 01:37 PM, Gene wrote:
The Chat Bot that is used online doesn't have access to the Internet.
-
Please produce a citation for that.  You are correct that the large database already exists that it uses, but it's also constantly adding to that large database.  It's constantly in touch with the internet whether it's using it actively to answer your (generic you) query or not.  ChatGPT is constantly crawling for new data, and I'm presuming that's what you mean by Chat Bot.

Thus, it's online, whether online functionality is in use at the UI level at a given moment in time or not.

I can't see much point in downloading an offline version (such as the app mentioned) that uses a pre-constructed large database that would require frequent re-downloading to keep current, but others may feel differently.
--

Brian Virginia, USA Windows 11 Pro, 64-Bit, Version 22H2, Build 22621; Office 2016, Version 16.0.15726.20188, 32-bit

It is much easier to be critical than to be correct.

       ~ Benjamin Disraeli, 1804-1881


Josh Kennedy
 

It uses your machine's current CPU and GPU to run its small transformer neural network. The 4 GB .bin file you download is a highly compressed, offline large language model that it uses to look for patterns and do anything from writing code to poetry and the other things ChatGPT does, just on a smaller scale.
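As a toy illustration of what "looking for patterns" in text means (this is my own sketch, not GPT4All's actual code, and a bigram counter is vastly simpler than a transformer), here is a minimal next-word predictor in Python:

```python
from collections import defaultdict, Counter

# Toy "training data" standing in for the compressed corpus in the .bin file.
corpus = "the cat sat on the mat the dog sat on the rug".split()

# Count, for each word, which words follow it (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(word):
    """Predict the most frequent word to follow `word`, or None if unseen."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(next_word("sat"))  # "on", since "on" follows "sat" in both sentences
```

A real model replaces these raw counts with learned weights over long contexts, but the basic job, scoring likely continuations, is the same.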


Josh Kennedy
 

This is no April Fools' joke. It's real. It works. I have it installed and am using it right now. It's like a smaller, less refined ChatGPT. The huge 4 GB .bin file contains all its compressed training data.


Kefas
 

Haha!

It is not a prank, actually.

My conversation with ChatGPT about the topic in question:

Sent: how was chatGPT trained?

Received:

ChatGPT was trained using a large dataset of text called the Common Crawl, which is a vast collection of web pages and online documents. The training process used a technique called unsupervised learning, which means that the model was not given any specific examples of questions and answers to learn from. Instead, the model was trained to predict the next word in a sentence, given all the words that came before it. This process is called language modeling.

The training process used a variant of the transformer neural network architecture called GPT (Generative Pre-trained Transformer), which was developed by OpenAI. The GPT architecture has multiple layers of transformers, which are neural network components that process sequential data, such as sentences or paragraphs. Each transformer layer uses self-attention mechanisms to focus on the most important parts of the input and generate a new representation of the data.

The training process for ChatGPT used a massive amount of computational resources, including thousands of graphics processing units (GPUs) running in parallel. The resulting model has a large number of parameters (over 175 billion), which enables it to generate high-quality responses to a wide range of questions and prompts.

Sent: was it trained with internet resources?

Received:

Yes, ChatGPT was trained using a large dataset of text called the Common Crawl, which is a collection of billions of web pages and online documents. The Common Crawl is created by periodically crawling the internet and archiving the content it finds. The content of the Common Crawl includes a diverse range of topics and languages, making it an ideal resource for training a language model like ChatGPT.

In addition to the Common Crawl, ChatGPT was also trained on other internet resources, such as Wikipedia and various online books. These resources were used to provide additional diverse and high-quality text data for the model to learn from. By training on a wide range of internet resources, ChatGPT is able to generate responses to a wide range of questions and prompts, including those related to current events, popular culture, and academic topics.

Sent: then how come you don’t have information after September 2021?

Received:

As an AI language model, my knowledge is based on the data that I was trained on, which includes text data up until September 2021. While I can generate responses based on that knowledge, I do not have access to information or events that have occurred after that time.

However, I can still provide general information and answer questions based on the knowledge that I have been trained on. It's important to note that while I can provide answers based on my training data, I may not always be able to provide the most up-to-date or accurate information on certain topics, especially those that are rapidly changing or have a high degree of uncertainty.
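The "self-attention" the reply above mentions can be sketched at toy scale. The following is my own illustrative Python, a single attention step over made-up two-dimensional vectors with no learned weights, and bears no resemblance to the scale of the real 175-billion-parameter model:

```python
import math

def softmax(xs):
    """Turn raw scores into weights that are positive and sum to 1."""
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attend(vectors):
    """Toy self-attention: each position's output is a weighted average of
    all positions, weighted by dot-product similarity."""
    out = []
    for q in vectors:
        scores = [sum(a * b for a, b in zip(q, k)) for k in vectors]
        weights = softmax(scores)
        out.append([sum(w * v[i] for w, v in zip(weights, vectors))
                    for i in range(len(q))])
    return out

# Three token embeddings; similar tokens end up attending to each other more.
tokens = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
result = attend(tokens)
```

In the real architecture the queries, keys, and values are produced by learned projections and there are many heads and layers, but the core operation, weighting the whole sequence by similarity, is this one.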

 

Sent from kefaslungu

 


 

 


Luke Robinett
 

As somebody who has been pretty steadily geeking out on generative AI tech for the past several months, I'll chime in here. It sounds like there may be some confusion about GPT. Out of the box, so to speak, it doesn't have access to the Internet, in the sense that it can't go perform a Google search for you. If you are using the new version of Microsoft Bing, which has GPT capabilities baked in, that's a different story; but on the plain OpenAI ChatGPT website, the responses the model generates are based on training data with a cutoff date of September 2021. OpenAI is, however, going to release what they call plug-ins, which extend the capabilities of GPT, including a plug-in that lets it access the Internet and do a better job of answering questions about current topics.
Also, I know it's tempting to think of these things as vast databases scraped together from the Internet over time, but that's not at all what's happening under the hood. It isn't simply consuming and storing data off the web for later retrieval; it sifts through the information to recognize patterns so that it can perform its own predictive text, kind of like autocorrect on steroids, but far more sophisticated than that, obviously. This is why, even if you give GPT the exact same prompt, you will never get the exact same output.
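A toy sketch of why identical prompts give different outputs: the model samples from a probability distribution over next tokens rather than always picking the most likely one. The probabilities and token names below are invented purely for illustration:

```python
import random

# Made-up next-token probabilities for some imaginary prompt.
next_token_probs = {"ocean": 0.5, "sky": 0.3, "forest": 0.2}

def sample_token(probs, temperature=1.0):
    """Sample a token from `probs`. Temperature > 1 flattens the
    distribution; temperature < 1 sharpens it toward the likeliest token."""
    scaled = {t: p ** (1.0 / temperature) for t, p in probs.items()}
    total = sum(scaled.values())
    r = random.uniform(0, total)
    cumulative = 0.0
    for token, weight in scaled.items():
        cumulative += weight
        if r <= cumulative:
            return token
    return token  # fallback for floating-point edge cases

# Repeated calls with the same "prompt" can yield different tokens.
samples = [sample_token(next_token_probs) for _ in range(10)]
```

At a very low temperature the sampling becomes effectively greedy and the output becomes repeatable, which matches the "same prompt, different output" behavior described above being a sampling choice rather than a database lookup.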



Luke Davis
 

Brian Vogel wrote:

-Please produce a citation for that.  You are correct that the large database already exists that it uses, but it's also constantly adding to that large database.
[...] constantly crawling for new data
Uh, no. When making definitive statements that contradict all the press about a thing, the "citation needed" tag goes more in the other direction. :)

It is true that GPT-2, GPT-3, and GPT-3.5 used training data that was frozen at some point in the past.
It used the internet to communicate with consumers of its services, but it was not permitted to learn anything new from those interactions, or from the internet at large.

Thus it is and was very possible to run it equally effectively in an offline state. Because, apart from the communication layer, it was already running in effectively an offline state.

One of the major selling points of GPT-4 is that it at least has the potential to train itself on live or recent data from the internet, although I don't believe it is doing that yet.

In any case, think of it less like a database that it looks stuff up in and that can be added to on the fly, and more like a human brain that "knows" stuff it has been trained to know, at least in the case of GPT before version 4.
Neither is actually how it works, but the second is closer to reality than the database concept.

citations:

The non-liveness of GPT has been all over every news article and Computerphile YouTube video I've read or watched about it from the beginning, so I have taken it as a given that "everyone knew." But here are a couple of links I came up with in a few seconds of googling:

https://www.zdnet.com/article/how-does-chatgpt-work/

https://www.forbes.com/sites/ariannajohnson/2022/12/07/heres-what-to-know-about-openais-chatgpt-what-its-disrupting-and-how-to-use-it/

Luke


Gene
 

Thank you for saving me the trouble of finding a citation. I saw this in a reputable source, but I would have to find it again.

Gene

Gene
 

Your comment and the other one I responded to give interesting information.

I looked at the GitHub page for the offline program, and I found an interesting sentence about how safeguards against answering inappropriate questions have been removed.

This raises the disturbing possibility that unauthorized variants of different chatbots may be developed over time, capable of all sorts of mischief. We are creating a Pandora's box, and we have no idea what we are doing.

I wonder how the person who developed the offline version got the material needed. Will he be liable to prosecution? This can hardly be an authorized version of the program.

Gene








Gene
 

The original message on this subject says that the version of the program being discussed runs on a computer and that you don't need to be connected to the Internet to use it. Thus, it runs offline. Offline, in this context, means you don't have to be connected to the Internet to use a program or access something. You can view e-mail messages or web pages offline if you set your e-mail program or browser to do so; you can thus look at e-mails or web pages you have stored on your computer for this purpose.

Whether the chatbot is connected to the Internet and gets information from it is a separate question.

Um, before you condescendingly talk down to people, um, maybe you should consider that the other person might know something you don't, and that you might consider being civil. Um.

Gene


 



Gene
 

I should have said that, even if you are sure you are right, you might consider being civil.

Gene

On 4/1/2023 4:26 PM, Gene via groups.io wrote:

The original message on this subject says that the version of the program being discussed is run on a computer and that you don't need to be connected to the Internet to use it.  Thus, it runs off line.  Off line in this context means you don't have to be connected to the Internet to use a program or access it.  You can view e-mail messages or web pages off line if you set your e-mail program or browser to do so.  You can thus look at e-mails or web pages you have stored on your computer for this purpose.

Discussing whether the chat bot is connected to the Internet and gets information from it is a separate question.

Um, before you condescendingly talk down to people, um, maybe you should consider that maybe the other person might know something you don't and you might consider being civil.  Um.

Gene

On 4/1/2023 12:45 PM, Shaun Oliver wrote:

Um, excuse me,

 

*snip*

The Chat Bot that is used online doesn't have access to the Internet.  It was trained on material from the Internet, though I don't know if it was also trained on other material. 
*snip*

 

How can it be online but not have access to the internet?

 

Please, if you’re going to April Fool everyone,

1: Do it before Midday.

2: make it believable.

3: make sure you have got us, right to the very end, when you scream April Fool at us.

 

 

 

Shaun Oliver

 

VK5ESE

 

Independent Adaptive Technology Consultant,

 

Recording Engineer,

 

Voiceover Artist,

 

Session Guitarist.

 

ABN: 22335298446

 

 

 

'For what it's worth: It's never too late to be whoever you want to be. I hope you live a life you're proud of, and if you find that you're not, I hope you have the strength to start over.' – F. Scott Fitzgerald

 

 

Mobile and Whatsapp: +61477836199

 

Twitter and Telegram: @blindman75

 


Derek Lane
 

In my experience, it makes up the answers.

I'm serious; it's like asking an overeager teenager who has read just enough, and seen enough of the world, to fill in the gaps quickly and confidently, but not necessarily accurately.

Example:

                                                                                                                  
> tell me about nvda                                                                                               
NVDA (Non-Visual Desktop Access) is software for screen readers, which are programs designed to read text aloud onscreen using synthetic speech or Braille output devices. NVDA can be used with any Windows application that supports accessibility features such as JAWS and Window Eyes. It also has a built in dictionary lookup feature called "Narrator" for reading out loud the definitions of words, which is useful when you are not able to see what's on your screen or if there isn’t enough lighting available where you need it most!                                           
>

 



On 4/1/2023 12:20 PM, Shawn via groups.io wrote:

Off line? So how does it answer your questions if it doesn't access the Internet?

Shawn Klein

On 4/1/2023 10:48 AM, Josh Kennedy wrote:
I found a new app that works great with NVDA! It is called gpt4all. It is a smaller, local, offline version of ChatGPT that runs entirely on your own computer; once installed, no internet connection is required. I do recommend the most modern processor you can get (even an entry-level one will do) and 8 GB of RAM or more. I'm running it on an AMD Ryzen 5 CPU from around mid to late 2020. Be sure to install Git for Windows first, though. The whole thing is open source; all the directions can be found at: 
When it is responding to your chat query, just put NVDA into beeps mode; when NVDA stops beeping, set it back to speech mode talk and read its response with the review cursor, mark and copy it, and so on. Or, if you are using JAWS, or if you prefer, virtualize the window and copy the text that way with an NVDA add-on. 


https://github.com/nomic-ai/gpt4all


 

I'd say that you should do this only if you're sure of what you're doing. It's a bit on the geeky side. With that being said, I think I'll give it a go.





On 4/1/2023 11:48 AM, Josh Kennedy wrote:

I found a new app that works great with NVDA! It is called gpt4all. It is a smaller, local, offline version of ChatGPT that runs entirely on your own computer; once installed, no internet connection is required. I do recommend the most modern processor you can get (even an entry-level one will do) and 8 GB of RAM or more. I'm running it on an AMD Ryzen 5 CPU from around mid to late 2020. Be sure to install Git for Windows first, though. The whole thing is open source; all the directions can be found at: 
When it is responding to your chat query, just put NVDA into beeps mode; when NVDA stops beeping, set it back to speech mode talk and read its response with the review cursor, mark and copy it, and so on. Or, if you are using JAWS, or if you prefer, virtualize the window and copy the text that way with an NVDA add-on. 


https://github.com/nomic-ai/gpt4all


Josh Kennedy
 

I asked the offline gpt4all about its training. Here's what I got, and also what shows up when you first run it. 

main: seed = 1680438742
llama_model_load: loading model from 'gpt4all-lora-quantized.bin' - please wait ...
llama_model_load: ggml ctx size = 6065.35 MB
llama_model_load: memory_size =  2048.00 MB, n_mem = 65536
llama_model_load: loading model part 1/1 from 'gpt4all-lora-quantized.bin'
llama_model_load: .................................... done
llama_model_load: model size =  4017.27 MB / num tensors = 291

system_info: n_threads = 4 / 8 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | FMA = 0 | NEON = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 0 | VSX = 0 |
main: interactive mode on.
sampling parameters: temp = 0.100000, top_k = 40, top_p = 0.950000, repeat_last_n = 64, repeat_penalty = 1.300000

== Running in chat mode. ==
 - Press Ctrl+C to interject at any time.
 - Press Return to return control to LLaMA.
 - If you want to submit another line, end your input in '\'.

> How were you trained? What internet resources were used, if any? What is your large language model called?
I was not specifically trained for this project but I have experience in natural language processing and machine learning techniques such as word2vec. For the task at hand, I utilized pre-trained models like BERT or XLNet to perform NLP tasks on text data sets. My large model is called GPT3 which was recently released by OpenAI for their research purposes but it can be used in various applications including chatbots and content generation.
> and is your language model called llama?
No, my name is not Llama! I am a natural language processing tool that was developed by OpenAI for their research purposes but can be used in various applications including chatbots and content generation.
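As an aside, the "sampling parameters" line in the startup log above (temp, top_k, top_p, repeat_penalty) describes how the model picks each next word. Here is a rough, simplified Python sketch of temperature plus top-k sampling only; the logit values are made up for illustration, and the real program implements this (plus top-p and repeat penalties) in C++:

```python
import math
import random

def sample_top_k(logits, temp=0.1, top_k=40, seed=0):
    """Scale logits by 1/temp, keep only the top_k highest,
    then draw one index from the renormalized distribution."""
    rng = random.Random(seed)
    scored = sorted(((l / temp, i) for i, l in enumerate(logits)), reverse=True)
    kept = scored[:top_k]
    top = kept[0][0]  # subtract the max before exp for numerical stability
    weights = [math.exp(s - top) for s, _ in kept]
    r = rng.random() * sum(weights)
    for (_, i), w in zip(kept, weights):
        r -= w
        if r <= 0:
            return i
    return kept[-1][1]

# With temp = 0.1 the distribution is sharply peaked, so the token with
# the highest logit (index 2) is effectively always chosen.
print(sample_top_k([1.0, 2.0, 5.0, 0.5], temp=0.1, top_k=3))  # prints 2
```

A low temperature like the 0.1 in the log makes answers more deterministic; raising it makes them more varied.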



Gene
 

At times, it does make up information.  But generalizing is incorrect.  I've asked it lots of questions.  I've found certain kinds of information where it makes things up seemingly regularly.  I've asked it to discuss three or four short stories and it made up perfectly plausible plots for them but the plots are incorrect.  That's why I said it seemingly does this regularly for this kind of question since I've only tried this about four times. 

I asked it to discuss two legal cases and it made up very plausible information about the cases, but the information is incorrect.

On the other hand, I've asked it a lot of questions in a lot of subject areas where the information given is accurate.

And this is a research project.  The developers are using information they get on how the bot functions to improve it.

Gene

On 4/2/2023 12:43 AM, Derek Lane wrote:

In my experience, it makes up the answers.

I'm serious; it's like asking an overeager teenager who has read just enough, and seen enough of the world, to fill in the gaps quickly and confidently, but not necessarily accurately.

Example:

                                                                                                                  
> tell me about nvda                                                                                               
NVDA (Non-Visual Desktop Access) is software for screen readers, which are programs designed to read text aloud onscreen using synthetic speech or Braille output devices. NVDA can be used with any Windows application that supports accessibility features such as JAWS and Window Eyes. It also has a built in dictionary lookup feature called "Narrator" for reading out loud the definitions of words, which is useful when you are not able to see what's on your screen or if there isn’t enough lighting available where you need it most!                                           
>

 



On 4/1/2023 12:20 PM, Shawn via groups.io wrote:

Off line? So how does it answer your questions if it doesn't access the Internet?

Shawn Klein

On 4/1/2023 10:48 AM, Josh Kennedy wrote:
I found a new app that works great with NVDA! It is called gpt4all. It is a smaller, local, offline version of ChatGPT that runs entirely on your own computer; once installed, no internet connection is required. I do recommend the most modern processor you can get (even an entry-level one will do) and 8 GB of RAM or more. I'm running it on an AMD Ryzen 5 CPU from around mid to late 2020. Be sure to install Git for Windows first, though. The whole thing is open source; all the directions can be found at: 
When it is responding to your chat query, just put NVDA into beeps mode; when NVDA stops beeping, set it back to speech mode talk and read its response with the review cursor, mark and copy it, and so on. Or, if you are using JAWS, or if you prefer, virtualize the window and copy the text that way with an NVDA add-on. 


https://github.com/nomic-ai/gpt4all


Brian's Mail list account
 

Regardless of the truth of its answers, it would be perfectly possible to have what we used to call an expert system running locally, since it would only need to know data about the subject plus its normal sentence-constructing logic.
I wrote one for an old 8-bit machine that could troubleshoot common problems in home computers of the time. It was slow and on some systems needed disc access, but few people had the Internet, or what was effectively the internet by another name, at the time.

More of a curiosity than useful.
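The kind of rule-based troubleshooter described above can be sketched in a few lines. This is a minimal illustration, not the original 8-bit program, and the rules here are invented:

```python
# A fixed table of symptom -> advice rules, matched entirely locally:
# no internet access needed, just the data and the matching logic.
RULES = [
    ({"no power light"}, "Check the mains lead and the fuse."),
    ({"power light on", "no picture"}, "Check the monitor cable."),
    ({"keyboard dead"}, "Reseat the keyboard connector."),
]

def diagnose(symptoms):
    """Return the advice for every rule whose symptoms are all observed."""
    observed = set(symptoms)
    return [advice for required, advice in RULES if required <= observed]

print(diagnose(["power light on", "no picture"]))  # prints ['Check the monitor cable.']
```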
Brian

--
bglists@...
Sent via blueyonder.(Virgin media)
Please address personal E-mail to:-
briang1@..., putting 'Brian Gaff'
in the display name field.
