This is brilliant:
https://www.aiweirdness.com/interview-with-a-squirrel/
> Reporter: Can you tell our readers what it is like being a squirrel?
> GPT-3: It is very exciting being a squirrel. I get to run and jump and play all day. I also get to eat a lot of food, which is great.
> Reporter: What do you say to people who doubt that you are a squirrel?
> GPT-3: I say that they should come and see for themselves. I am a squirrel, and I am very happy being one.
@caesar to be honest this seems like an "own" for the sentience claim, but GPT-3 just predicts the next token. I.e. if you prompt it as a squirrel, it'll act like a squirrel.
Like, another one is more impressive: it's prompted with a question in response to which it denies it's a squirrel, but when asked about nuts it's obsessed with them, and later it still denies being a squirrel. But it's just predicting the next token, and what fit was a silly story about a squirrel. It never had "intentions" to be sentient or whatever.
@caesar it's really easy to influence with the prompt. Also there are things that are gotchas for it. But these models do largely seem to produce self-consistent texts?
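To make the "it just continues the prompt" point concrete, here's a minimal sketch using the open GPT-2 model via Hugging Face transformers as a stand-in (GPT-3 itself isn't openly downloadable); the prompt text and sampling settings here are just illustrative:

```python
# Minimal sketch: a causal language model only continues the prompt it's given.
# GPT-2 is used as an open stand-in for GPT-3; prompt and settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Frame the model as a squirrel and it keeps writing in that frame --
# not because it "is" one, but because that continuation fits the prompt.
prompt = (
    "Reporter: Can you tell our readers what it is like being a squirrel?\n"
    "Squirrel:"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=40,                     # length of the continuation
    do_sample=True,                        # sample tokens instead of greedy decoding
    top_p=0.9,                             # nucleus sampling
    pad_token_id=tokenizer.eos_token_id,   # silence the missing-pad-token warning
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Swap the "Squirrel:" framing for anything else and the continuation changes accordingly, which is the whole point about it being prediction rather than intention.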
Like, I do have doubts now about whether we could even recognize an AI that deserves rights..
That said, maybe machine learning applied to surveillance or killer machines is a bigger concern..