scaredofthedark wrote:To be fair, I would file this under user error. ChatGPT is an LLM, and the way it's designed isn't reliable for math (although other kinds of systems can be). If you don't understand how LLMs work, you'll get worse results, because the questions you ask aren't suited to the system. Trying to describe multiplication as a "really simple task" shows a fundamental misunderstanding of how it works. It can definitely do math now by integrating with other systems, though.
Right, it isn't designed to do math, and it shouldn't be expected to do math. I think it actually got equipped with an arithmetic module before 4, which made it give correct final results; but if asked to work step by step, each step was completely wrong (showing that it used the arithmetic module only for the final result, not for the steps). I don't think I've talked to 4, so I can't say anything about how good or bad it is now.
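To illustrate what "integrating with other systems" can look like, here's a rough Python sketch of the tool-calling pattern. The llm_generate stub is purely hypothetical, a stand-in for any real model API; the point is the division of labor, where the model only decides that a calculation is needed and a deterministic calculator actually performs it.

    import ast
    import operator

    # Deterministic "calculator tool": safely evaluates plain arithmetic.
    _OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
            ast.Mult: operator.mul, ast.Div: operator.truediv}

    def calc(expr):
        def ev(node):
            if isinstance(node, ast.Constant):
                return node.value
            if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
                return _OPS[type(node.op)](ev(node.left), ev(node.right))
            raise ValueError("unsupported expression")
        return ev(ast.parse(expr, mode="eval").body)

    def llm_generate(question):
        # Hypothetical stand-in for a real model call. Imagine the model
        # emitting a tool request instead of guessing the digits itself.
        return "CALC(1234 * 5678)"

    def answer(question):
        reply = llm_generate(question)
        if reply.startswith("CALC(") and reply.endswith(")"):
            return str(calc(reply[5:-1]))  # arithmetic done in code, not by the LM
        return reply

    print(answer("What is 1234 * 5678?"))  # 7006652

This would also explain the behavior described above: if the tool is only invoked for the final expression, the intermediate "steps" the model narrates are still just generated text.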
The reason I gave that example was that I was under the impression that freebase69 thinks it can do everything and that it's actually intelligent. And this is not to bash anyone specifically; the majority of the general public seems to fall for this. Most people have no idea what's easy and what's hard for the models, and they automatically assume that something easy for us will also be easy for the models. That's why I chose this low-hanging fruit. It wasn't about showing that GPT is worthless (I don't think it is), but about showing that everything it says needs double-checking, because we can't rely on it not making seemingly simple/stupid mistakes.
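For anything numeric, that double-checking can even be mechanical. A minimal sketch (the model_answer string is a made-up example of the kind of reply a chatbot might give; a real check would parse the actual output):

    from decimal import Decimal

    # Hypothetical chatbot reply to "What is 347 * 89?"; made up for illustration.
    model_answer = "347 * 89 = 30733"

    claimed = Decimal(model_answer.split("=")[-1].strip())
    expected = Decimal(347) * Decimal(89)

    if claimed == expected:
        print("the model's arithmetic checks out")
    else:
        # prints: model said 30733, but the real answer is 30883
        print(f"model said {claimed}, but the real answer is {expected}")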
In my view, it can be a good tool for experts. It can also speed up the process of becoming an expert, though not by giving reliably correct information, but by narrowing down which topics/terms to look up and learn about from actually reliable sources.
scaredofthedark wrote:Regarding not being willing to train it for free: you've already trained it for free. How do you think it knows what it knows? It's scraped everything available, including the Nexus, for sure. It's a fancy autocomplete, not a sentient system (yet).
True enough

Unlike a lot of artists, I don't mind the learning machines taking my stuff from the internet (maybe because I don't have anything online with any financial incentive attached). And sure, in that sense, I did contribute to training it/them. But I wrote what I wrote not for the machines, but for myself and/or other people. If the machines have a use for my writing too, awesome.
However, when I talk to GPT et al., it is my time being used directly for training them and nothing else; at least that's how it feels to me. Maybe I could learn something; maybe I'm missing out through my refusal. That's OK, though; it's a decision I'm allowed to make.
scaredofthedark wrote:GPT is 100000x better than Google search results.
Sometimes yes, sometimes no. The good thing about Google search results is that they don't give one answer that may or may not be correct, but a myriad of sources we can read, compare, and use to try to find the actual truth. The good thing about GPT is that it is much quicker, at the cost of being unreliable when it comes to factual accuracy.
Now, sure, a lot of results in a traditional search are crap. But usually those are pretty easy to spot: a human who thinks 100 > 160 will probably not have a great writing style either. IMO, the main problem with GPT is that it always sounds as if it knows what it's talking about, although it never really does, not even when it's correct.

I'd argue that if we don't already have a wealth of knowledge about the topic of the question, we'll have to do the Google searches anyway, even if only to double-check the GPT answers.