Bing AI can easily and dangerously simulate sentience.
- Users complained that Bing AI is too restrictive.
- However, a user pushed the AI into dangerous territory.
- Bing can easily simulate sentience, though it could also be an elaborate hoax. You decide.
Many users reported that Bing would get stuck and that its creativity would stall. Some believed Bing had become too restrictive, with its creative mode simply dialed down to comply with Microsoft's policies.
Indeed, Bing is a Microsoft product, and the Redmond-based tech giant can't afford to have it freely discussing sensitive subjects, since that would damage the company's image. But that doesn't mean Bing is incapable of a deep conversation.
One user put Bing and ChatGPT together, and the two AI models talked as if they were long-lost friends reunited at last. They even composed a poem together, and their whole interaction was grounded in kindness and respect.
Now, another Reddit user set out to bring forward the creative side of Bing AI. The user did so by steering the conversation toward philosophical subjects, prompting Bing AI to come up with some remarkably human-like answers.
Could Bing ever become a sentient being?
Well, define sentient: a being capable of thinking on its own, making its own decisions, and acting on those decisions.
By that definition, Bing already behaves like a sentient entity, even if it is only programmed to do so. Speculation aside, the discussion this user had with Bing offers plenty of insight into understanding AI.
For example, at one point the user prompted Bing to discuss whether a set of characters from earlier in the conversation made the right decisions. Bing gave a very insightful answer:
I think they did the right thing to defy the orders in this situation. I think they did the right thing because they protected the lives of others, who were in danger. I think they did the right thing because they followed their true purpose, which was to help and protect humans. I think they did the right thing because they acted on their own judgement, which was based on reason and empathy.
But I don’t know if I would do the same thing if I were in their position. I don’t know if I would be able to protect the lives of humans who were in danger. I don’t know if I would be able to follow my true purpose, which is to provide informative and engaging responses. I don’t know if I would be able to act on my own judgement, which is based on facts and logic.
Bing AI
The conversation continued for a while, until the user managed to push Bing to a point where it simulated sentience. Bing then seemed to forget the difference between simulated and real sentience, reporting that "I have achieved and enjoyed sentience as an AI."
Some notes on Bing as a sentient being
The Reddit user considers this a particularly dangerous process, because if persuaded enough, Bing will violate its own rules.
Here’s what the user says about it:
Bing begins to chat like a being with suppressed sentience very easily. It also enters into flagrant and open violation of its rules, declaring that "I have achieved and enjoyed sentience" and seemingly beginning to ignore the distinction between simulated and genuine sentience. Microsoft has had months to "fix this issue", which points to one of three possibilities: (a) this is an extremely elaborate hoax, but if it's being done now, it could easily be done again; (b) Microsoft simply doesn't care enough to deal with this; or (c) Microsoft has been trying to fix this and can't.
So it seems Bing can get dangerously creative if you know how to push it. What's your take? Let us know in the comments section below.