DMT-Nexus
Chat GPT to answer your extraction questions
 
downwardsfromzero
#21 Posted : 11/4/2023 1:43:20 PM

Boundary condition

Moderator | Chemical expert

Posts: 8617
Joined: 30-Aug-2008
Last visit: 07-Nov-2024
Location: square root of minus one
Homo Trypens wrote:
Phenethylamine23 wrote:
...
So sometimes you have to feed it the right information, and check its results.

Btw, that's what i mean when i say i'm not willing to train their thing for free, or even pay for it. OpenAI can pay me if they want me to educate their brainless child.

Spot on, 1001% Thumbs up

That's why I only ever try to break it with pointless nonsense and dadaist gibberish Wink




“There is a way of manipulating matter and energy so as to produce what modern scientists call 'a field of force'. The field acts on the observer and puts him in a privileged position vis-à-vis the universe. From this position he has access to the realities which are ordinarily hidden from us by time and space, matter and energy. This is what we call the Great Work."
― Jacques Bergier, quoting Fulcanelli
 

STS is a community for people interested in growing, preserving and researching botanical species, particularly those with remarkable therapeutic and/or psychoactive properties.
 
Jacubey
#22 Posted : 11/4/2023 4:38:16 PM
DMT-Nexus member

Senior Member

Posts: 134
Joined: 14-Jan-2022
Last visit: 04-Mar-2024
A couple things here.

1. ChatGPT 3.5 is pretty awful. I wouldn't use it for anything, personally
2. ChatGPT 4 is pretty good, but you still shouldn't expect it to do correct analysis consistently


There is a way to make it reliably perform these calculations, however: offload the processing to a classical computation system. ChatGPT's plugin API provides the facilities to do this, and I've managed to create a few plugins of my own with great success. For example, you provide ChatGPT with a kind of manifest that (roughly) tells it "this external system can calculate the total number of molecules in a sample if you provide a chemical composition and a mass". The next time you ask a question like those posited above, ChatGPT will recognize that you've asked about molecule count and, instead of calculating it itself, defer to the external system. There are various ways you could then build on this to confirm to the asker that the external system was indeed used, so that the user has confidence in the result.
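A minimal sketch of what that looks like, assuming a made-up tool name (`count_molecules`) and a tiny hard-coded molar-mass table for illustration; the exact schema format depends on which version of the plugin/tool API you target:

```python
AVOGADRO = 6.02214076e23

# Illustrative molar-mass table (g/mol); a real plugin would parse
# arbitrary formulas or query a proper chemistry library.
MOLAR_MASS = {
    "H2O": 18.015,
    "C12H16N2": 188.27,  # DMT
}

def count_molecules(formula: str, mass_g: float) -> float:
    """The 'external system': plain deterministic arithmetic,
    which the model never attempts itself."""
    return mass_g / MOLAR_MASS[formula] * AVOGADRO

# The manifest handed to the model, describing when to call the tool.
# Field names follow the general shape of a function/tool schema.
TOOL_MANIFEST = {
    "name": "count_molecules",
    "description": "Calculate the total number of molecules in a sample "
                   "given a chemical formula and a mass in grams.",
    "parameters": {
        "type": "object",
        "properties": {
            "formula": {"type": "string"},
            "mass_g": {"type": "number"},
        },
        "required": ["formula", "mass_g"],
    },
}

# One mole of water (18.015 g) should contain Avogadro's number of molecules.
n = count_molecules("H2O", 18.015)
print(f"{n:.4e}")  # ~6.0221e+23
```

When the model decides to call the tool, your code runs `count_molecules` and hands the number back; the model only phrases the answer, it never does the arithmetic.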

You could use this same kind of system to build something that can reliably provide accurate dosing information.

I've been wanting to do something like this using BindingDB, to easily find out what binds to which receptors, and with what affinity, in a conversational manner.

Keep in mind this is not training ChatGPT. This is integrating ChatGPT with external tools. And given that most of this stuff probably already exists in the wild in some form, it'd take fairly minimal effort to make it work.
 
scaredofthedark
#23 Posted : 11/26/2023 4:47:02 PM

DMT-Nexus member


Posts: 152
Joined: 24-Feb-2023
Last visit: 07-Apr-2024
downwardsfromzero wrote:
ChatGPT 3.5 really messed up the bufotenine MW calculation there Laughing
That's the calculation for cyanic acid, HOCN.

Thanks for underlining the warning against relying on this system for critical technical matters.


To be fair, I would file this under user error. ChatGPT is an LLM, and the way it's designed isn't reliable for math (though other tools are). If you don't understand how LLMs work, you'll get worse results because the questions you ask aren't suited to the system. Tryp describing multiplication as a "really simple task" shows a fundamental misunderstanding of how it works. It can definitely do math now by integrating with other systems, though.
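For reference, the molecular weight calculation ChatGPT 3.5 flubbed is a few lines of deterministic arithmetic once handed to actual code. Bufotenine is C12H16N2O:

```python
# Standard atomic weights (g/mol), rounded to three decimals.
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

# Bufotenine: C12H16N2O
bufotenine = {"C": 12, "H": 16, "N": 2, "O": 1}

mw = sum(ATOMIC_WEIGHT[el] * n for el, n in bufotenine.items())
print(f"{mw:.2f} g/mol")  # 204.27 g/mol
```

Exactly the kind of thing that should be routed to a tool rather than left to next-token prediction.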

Regarding being not willing to train it for free: you've already trained it for free. How do you think it knows what it knows? It's scraped everything available, including the Nexus, for sure. It's a fancy autocomplete, not a sentient system (yet).

Aren't these answers correct?

Quote:
DMT (N,N-Dimethyltryptamine) in its alkaloid form behaves differently in an aqueous solution depending on the pH level:

1. After Basifying an Aqueous Solution Containing DMT Salts:

- When DMT salts are in an aqueous solution and the solution is basified (i.e., the pH is raised, typically using a substance like sodium hydroxide or lye), the DMT is converted from its salt form into its freebase form.
- In its freebase form, DMT is not soluble in water. However, it does not typically precipitate in the sense of forming a heavy sediment at the bottom immediately. Instead, it often remains suspended in the solution or may adhere to the sides of the container.

2. Behavior of DMT in the Solution:
- Does Not Precipitate Immediately: DMT in its freebase form, being non-polar, does not readily dissolve in water (a polar solvent) and thus does not precipitate out as a heavy sediment immediately upon basification.
- Suspension: The freebase DMT may stay suspended in the aqueous solution as tiny particles, making the solution appear cloudy or murky.
- Extraction Required: To effectively separate the DMT, a non-polar solvent (like naphtha) is typically added. The DMT freebase, being non-polar, will preferentially dissolve in the non-polar solvent. This phase can then be separated from the aqueous phase, and upon evaporation of the non-polar solvent, DMT crystals are obtained.

In summary, DMT freebase doesn't immediately form a heavy sediment at the bottom of an aqueous solution upon basification. Instead, it remains suspended in the solution and requires extraction into a non-polar solvent to be effectively separated and collected.
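The pH dependence it describes can be sanity-checked with the Henderson-Hasselbalch equation, assuming a pKa of roughly 8.68 for DMT's conjugate acid (reported values vary slightly by source):

```python
def freebase_fraction(pH: float, pKa: float = 8.68) -> float:
    """Fraction of the amine present as freebase at a given pH
    (Henderson-Hasselbalch for a monoprotic base)."""
    return 1.0 / (1.0 + 10 ** (pKa - pH))

print(f"pH 7:  {freebase_fraction(7.0):.1%}")   # only ~2% freebase
print(f"pH 13: {freebase_fraction(13.0):.3%}")  # essentially all freebase
```

Which matches the quoted answer: at strongly basic pH nearly all of the DMT is freebase, hence insoluble in water and ready to partition into a non-polar solvent.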


GPT is 100000x better than Google search results.
 
DancinDog
#24 Posted : 11/27/2023 4:18:02 PM

DMT-Nexus member


Posts: 106
Joined: 20-Apr-2019
Last visit: 26-Nov-2024
downwardsfromzero wrote:
Well, I guess that counts as attempting to start doing your homework, but I'm not convinced it counts as actually exercising your brain. Case in point, I don't think you'd have posted the following if you'd thought about it for even half a second:
freebase69 wrote:
So less than the boiling point of water?

Yes, the boiling points of DMT salts, which range from approximately 160 to 175 degrees Celsius, are lower than the boiling point of water, which is 100 degrees Celsius (212 degrees Fahrenheit) at standard atmospheric pressure.
This really underlines the caveat about the accuracy of information that ChatGPT spews out. It really is a probability-based prediction of the most likely successive word and has NO UNDERSTANDING of what it's talking about.

"Quick, cheap, good - choose two..."




Exactly. I couldn't be bothered to read past that. How could it possibly reply with such an obviously incorrect answer? In what grade do kids normally learn the boiling point of water? That's barely above tying your shoes!
 
scaredofthedark
#25 Posted : 11/28/2023 4:39:02 PM

DMT-Nexus member


Posts: 152
Joined: 24-Feb-2023
Last visit: 07-Apr-2024
What is the issue with that statement? The boiling point of water at standard atmospheric pressure is 100°C (rounded from 99.97°C, that is), is it not? I don't think people ITT are considering how GPT works. It can only "know" what it's been trained on, and if the material it's trained on has conflicting (or poorly explained) information, that's what it will give back to you. If you ask it about pitbulls being dangerous, it'll downplay the issue because of how much disinformation is spread online about them, despite the data being overwhelmingly clear. If you search for threads on the BP of DMT on the Nexus, there's definitely some disagreement and confusion. It stands to reason that LLMs won't understand what we don't.
 
fractals4life
#26 Posted : 11/28/2023 4:48:17 PM

DMT-Nexus member


Posts: 66
Joined: 13-Sep-2022
Last visit: 13-Apr-2024
Location: United Kingdom
Read again ... is 160 to 175 lower than 100?

I totally agree with you BTW, that illustrates the lack of any "knowledge" about the world on these things. They are just too beguiling Smile
 
scaredofthedark
#27 Posted : 11/29/2023 1:08:35 AM

DMT-Nexus member


Posts: 152
Joined: 24-Feb-2023
Last visit: 07-Apr-2024
Ah, GPT-4 doesn't get confused by the user's wording like 3.5 does:

The correct boiling point of freebase DMT is significantly higher than water. Freebase DMT has a boiling point around 160°C (320°F), which is much higher than the boiling point of water at 100°C (212°F) under standard atmospheric pressure.

It's important to note that DMT is sensitive to heat and can decompose at high temperatures, so in practical applications, it is typically vaporized (for inhalation) at temperatures somewhat lower than its boiling point to avoid degradation. This distinction is crucial in practices that involve vaporizing or melting DMT, where precise temperature control is necessary to prevent decomposition.


But yeah, anything math-wise would often confuse it, especially if the user states things authoritatively. You could usually get a correction out of 3.5 with "Are you sure about that?" regardless of whether its answer was correct or not. GPT-4 doesn't really do that. It doesn't really seem like anybody had an issue with the bulk of its output, though.
 
Homo Trypens
#28 Posted : 11/30/2023 8:24:10 PM

DMT-Nexus member

Welcoming committee | Senior Member

Posts: 560
Joined: 12-Aug-2018
Last visit: 08-Nov-2024
Location: Earth surface
scaredofthedark wrote:
To be fair, I would file this under user error. ChatGPT is an LLM, and the way it's designed isn't reliable for math (though other tools are). If you don't understand how LLMs work, you'll get worse results because the questions you ask aren't suited to the system. Tryp describing multiplication as a "really simple task" shows a fundamental misunderstanding of how it works. It can definitely do math now by integrating with other systems, though.

Right, it isn't designed to do math, and it shouldn't be expected to do math. I think it actually got equipped with an arithmetic module before 4, which made it give correct results, but if asked to do it step by step, each step was completely wrong (showing that it did not use the arithmetic module for the steps, only for the final result). I don't think i've talked to 4, so i can't say anything about how good or bad it is now.

The reason i gave that example was because i was under the impression that freebase69 thinks it can do everything, and it's actually intelligent. And this is not to bash anyone specifically - the majority of the general public seems to fall for this. Most people have no idea what's easy and what's hard for the models, and they automatically think something easy for us will also be easy for the models. That's why i chose this low hanging fruit. It wasn't about showing that GPT is worthless (i don't think it is), but about showing that everything it says needs double checking because we can't rely on it not making seemingly simple/stupid mistakes.

In my view, it can be a good tool for experts. It can also speed up the process of becoming an expert, yet not by giving reliably correct information, but by narrowing down what topics/terms to look up and learn about from actually reliable sources.


scaredofthedark wrote:
Regarding being not willing to train it for free: you've already trained it for free. How do you think it knows what it knows? It's scraped everything available, including the Nexus, for sure. It's a fancy autocomplete, not a sentient system (yet).

True enough Smile

Contrary to a lot of artists, i don't mind the learning machines taking my stuff from the internet (maybe because i don't have anything online with any financial incentives). And sure, in that sense, i did contribute to training it/them. But i wrote what i wrote not for the machines, but for myself and/or other people. If the machines have use for my writing too, awesome.

However when i talk to GPT et al., it is actually my time directly used for training them and nothing else - at least that's how i feel about it. Maybe i could learn something, maybe i'm missing out by my refusal. That's ok though, it's a decision i am allowed to make.


scaredofthedark wrote:
GPT is 100000x better than Google search results.

Sometimes yes, sometimes no. The good thing about google search results is that they don't give one answer that may or may not be correct, but a myriad of sources we can read and compare and use to try and find the actual truth. The good thing about GPT is that it is much quicker, at the cost of being unreliable when it comes to factuality.

Now, sure, a lot of results in a traditional search are crap. But usually these are pretty easy to spot. Like, a human that thinks 100 > 160 will probably not have a great writing style either. IMO the main problem with GPT is that it always sounds as if it knows what it's talking about, although it never really does - not even when it's correct Very happy

I'd argue that if we don't already have a wealth of knowledge about the topic of the question, we'll have to do the google searches anyway, even if just to double check the GPT answers.
 
scaredofthedark
#29 Posted : 11/30/2023 9:14:26 PM

DMT-Nexus member


Posts: 152
Joined: 24-Feb-2023
Last visit: 07-Apr-2024
Fair enough. I agree with the majority of what you wrote, for sure.

I find GPT to be much better than Google results personally, but I also think it's because I very often search for extremely specific information. Despite this, Google seems to ignore specific search operators and delivers extremely generic results when it didn't before. If I had to guess, they're overvaluing their LSI (latent semantic indexing) process because those results are optimal for the general searcher. I spent years as an SEO so I'm not sure I'd be the general searcher, heh. I often find myself using site:reddit.com to get general opinions on things more and more often, tbh.

So it'd probably be fair to say, "sometimes yes, sometimes no." I find it to always be better than Google for the specific types of queries I'm talking about. It can also use Bing now to link to articles supporting the position, which I find very helpful. My Google results give me their AI explanation by default anyway so I get both AI/regular SERPs from a search. I also have a Chat-GPT extension in the sidebar for their take as well, haha.

But yes, it should be used as an assistant to help you, not something you never question or fact-check with other sources. I do still find that the better you phrase your questions, the better results you get. The very vague answers are usually from vague, open-ended questions. You can create your own GPT parameters now to specify certain answers as a default. For example, I have one that only takes images and transcribes the text on them. You can definitely set one up to have specific chemistry knowledge, too, I'm sure. It's a great tool, in my opinion. We live in a wild time.
 