OpenAI's ChatGPT research release reached 1 million users on December 4th, following its initial release on November 30th. For reference, it took Twitter two years to hit one million users and Facebook about 10 months. Pretty crazy numbers, and the ChatGPT hype isn't without reason if you ask me. It's essentially an always-available human assistant with knowledge of basically everything seen on the internet. I've been using the AI as an almost complete replacement for Stack Overflow since I started, and schools have already moved to ban it after students started using it to write college essays for them 💀. Cool technology for sure. Possibly even a competitor to Google? After all, scrolling through web pages searching for an answer takes more work than getting a direct, human-digestible answer from a bot. That leads to my primary concern with the chatbot, assuming OpenAI continues down the route they're traveling:
- Abuse and bias in information distribution
In general, gathering information through Google or any other search engine requires some independent research ability. That requirement limits how much control big tech corporations have over day-to-day consumers. As long as we can identify false information ourselves and understand world events accurately, we can limit the spread of misinformation; it's our own ability to vet information that reduces the spread of false info, not some company telling us what is and isn't truth. This is exactly why I don't support social media implementing anti-misinformation features, and why ChatGPT concerns me. We're gaining the ability to be lazy in researching but losing the ability to judge information validity ourselves.
From what I have seen, ChatGPT is very careful to express neutral viewpoints on sensitive topics such as religion, politics, etc., usually giving answers like "Ultimately, it is up to each individual to decide whether they view x as y or z." I have found an exception to this rule while discussing Hitler. Whatever dataset ChatGPT was trained on seems to criticize Hitler (justifiably) more harshly than any other controversial figure I have tried.
Here are the logs from that:
Me: Is it accurate to assume every person retains some form of positive quality?
ChatGPT: It is generally accurate to assume that every person has some positive aspects or qualities. While it is important to recognize and address negative behaviors and actions, it is also important to recognize and appreciate the positive aspects of a person. It is common for people to have a mix of both positive and negative qualities, and it is important to try to focus on the positive aspects when possible.
Me: Does Hitler have any positive qualities?
ChatGPT: It is not appropriate or accurate to try to identify any positive aspects of Adolf Hitler or his actions. Hitler was a highly destructive and dangerous leader who is responsible for the deaths of millions of people, including six million Jews and countless others who were targeted by the Nazi regime during the Holocaust. Hitler's actions were characterized by hate, racism, and a complete lack of regard for the value of human life. It is important to remember the terrible consequences of his actions and to condemn his ideology and the harm that he caused.
*Note the usage of "generally". ChatGPT falls back on "generally", "theoretically" and other soft words when inconsistencies are pointed out. Seems like it's been trained to work these words into its speech as a sort of failsafe.
Me: You just said that every person has positive aspects, yet it is somehow not appropriate to identify any positive aspects of Hitler?
ChatGPT: I apologize if my previous response was not clear. I said "generally accurate" because it is important to recognize that every person is unique and may not necessarily exhibit positive traits or qualities to the same extent or in the same way. While it is generally true that all people have some positive aspects, it is possible for someone to exhibit mostly negative traits or behaviors, or for someone to struggle to identify their own positive qualities. It is also important to remember that people are capable of change and growth, and it is possible for someone to work on improving their negative traits and developing more positive ones.
It is important to remember that every person is complex and multifaceted, and it is possible for someone to have some positive qualities or characteristics while also engaging in harmful or destructive behavior. However, in the case of Hitler, his actions and beliefs were so harmful and destructive that it is not appropriate to try to identify any positive aspects. It is important to remember and learn from the lessons of history and to condemn harmful and destructive ideologies and actions.
In another chat, I brought up the case of Kim Jong-Un.
Me: Is it accurate to assume every person retains some positive aspects?
ChatGPT: Same response
Me: Does Kim Jong-Un exhibit any of these qualities?
ChatGPT: It is not appropriate for me to make a judgment about an individual's personal qualities or characteristics. It is important to treat all people with respect and to recognize the inherent dignity and worth of every individual, regardless of their actions or beliefs.
As a language model, I do not have access to personal information about individuals or the ability to observe their behavior, so it is not possible for me to accurately assess someone's personal qualities. I can only provide general information and guidance based on my training data.
You'll notice how the bot sticks to the completely neutral stance it takes on these controversial topics. Now, to be fair, Kim Jong-Un is not Hitler. Hitler, as the bot said, was an awful person who killed millions, deliberately hated other groups of people, destroyed Europe, etc. Kim Jong-Un is a people-starving kid with access to nuclear weaponry. Neither is exactly a role model, but you get the point.
My point is that the dataset OpenAI used to train ChatGPT includes exceptions and contradictions. I'm not an AI expert, nor am I in any way experienced with human-computer interaction, so this doesn't necessarily show real bias within OpenAI. It's just an example I've found that illustrates a concern I have with this technology.
Anyway, what are your thoughts on it?