As somebody who has done a lot of research on transformer networks, I can assure you that neither GPT-3 nor LaMDA is sentient.
These are machines that just use probability to predict the next word in a sequence. They have no sensory experience of the external world whatsoever. Even without pinning down a definition of sentience, you can tell these bots are not sentient: they are *extremely* suggestible. You can have a conversation where a model vehemently insists that it is not only sentient but human, then in the next instant have another conversation where the same model claims it is not sentient at all.
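To make "predicting words" concrete, here is a minimal sketch of the only thing these models actually compute: a probability distribution over the next token, given the text so far. It uses the small, publicly available GPT-2 through Hugging Face's transformers library, since GPT-3 and LaMDA weights aren't public; the prompt and model choice are purely illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# GPT-2 stands in here for GPT-3/LaMDA; the mechanism is the same idea.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The cat sat on the"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, seq_len, vocab_size)

# Softmax over the last position = probability of each possible next token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)

# Show the five most likely continuations and their probabilities.
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}  p={prob.item():.3f}")
```

Everything the bot "says" is drawn from distributions like this one, token by token; there is no inner experience being reported, just the statistically likely next word.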
Note, however, that since the training data all comes from sentient beings (human writing), the model will be biased toward claiming self-awareness; it is simply echoing the self-aware beings that produced its training text.
These newer AI models can definitely pass the Turing test, but they are far from sentient. At minimum, I think sentience requires some understanding of temporality, which they simply do not have.
Without an intrinsic understanding of time, there is no way to grasp causality beyond basic instantaneous sequencing, and I believe understanding causality is a necessary part of sentience.