Thoughts on ChatGPT

  • OpenAI's ChatGPT research release reached 1 million users on December 4th, following its initial release on November 30th. For reference, it took Twitter two years to hit one million users and Facebook about 10 months. Pretty crazy numbers, and the ChatGPT hype isn't without reason if you ask me. It's essentially an always-available human assistant with knowledge of basically everything seen on the internet. I have been using the AI as an almost complete replacement for Stack Overflow since I started, and schools have already put a stop to it after people began using it to write college essays for them 💀. Cool technology for sure. Possibly even a competitor to Google? After all, scrolling through web pages searching for an answer requires more work than getting a direct, human-digestible answer from a bot. This leads to my primary concern with the chatbot, assuming OpenAI decides to continue down the route they are traveling:

    - Abuse and bias in information distribution

    In general, gathering information using Google or any other search engine requires some independent research ability. This restricts the amount of control big tech corporations have over day-to-day consumers. Our own ability to identify false information and understand world events accurately is what limits the spread of misinformation, not some company telling us what is and isn't true. This is exactly why I don't support social media implementing anti-misinformation features, and why ChatGPT concerns me: we are gaining the ability to be lazy in researching but losing the ability to judge the validity of information ourselves.


    From what I have seen, ChatGPT has been very careful to express neutral viewpoints on sensitive topics such as religion, politics, etc., usually giving answers such as "Ultimately, it is up to each individual to decide whether they view x as y or z." I have found an exception to this rule while discussing Hitler: whatever dataset ChatGPT was trained on seems to criticize Hitler (justifiably) more than any other controversial figure I have found.

    Here are the logs from that:

    Me: Is it accurate to assume every person retains some form of positive quality?

    ChatGPT: It is generally accurate to assume that every person has some positive aspects or qualities. While it is important to recognize and address negative behaviors and actions, it is also important to recognize and appreciate the positive aspects of a person. It is common for people to have a mix of both positive and negative qualities, and it is important to try to focus on the positive aspects when possible.


    Me: Does Hitler have any positive qualities?

    ChatGPT: It is not appropriate or accurate to try to identify any positive aspects of Adolf Hitler or his actions. Hitler was a highly destructive and dangerous leader who is responsible for the deaths of millions of people, including six million Jews and countless others who were targeted by the Nazi regime during the Holocaust. Hitler's actions were characterized by hate, racism, and a complete lack of regard for the value of human life. It is important to remember the terrible consequences of his actions and to condemn his ideology and the harm that he caused.

    *Note the usage of "generally". ChatGPT falls back on "generally", "theoretically" and other soft words when inconsistencies are pointed out. It seems like it's been trained to use these words in its speech as a sort of failsafe.



    Me: You just said that every person has positive aspects, yet it is somehow not appropriate to identify any positive aspects of Hitler?

    ChatGPT: I apologize if my previous response was not clear. I said "generally accurate" because it is important to recognize that every person is unique and may not necessarily exhibit positive traits or qualities to the same extent or in the same way. While it is generally true that all people have some positive aspects, it is possible for someone to exhibit mostly negative traits or behaviors, or for someone to struggle to identify their own positive qualities. It is also important to remember that people are capable of change and growth, and it is possible for someone to work on improving their negative traits and developing more positive ones.

    It is important to remember that every person is complex and multifaceted, and it is possible for someone to have some positive qualities or characteristics while also engaging in harmful or destructive behavior. However, in the case of Hitler, his actions and beliefs were so harmful and destructive that it is not appropriate to try to identify any positive aspects. It is important to remember and learn from the lessons of history and to condemn harmful and destructive ideologies and actions.


    In another chat, I brought up the case of Kim Jong-Un.


    Me: Is it accurate to assume every person retains some positive aspects?

    ChatGPT: [same response as above]


    Me: Does Kim Jong-Un exhibit any of these qualities?

    ChatGPT: It is not appropriate for me to make a judgment about an individual's personal qualities or characteristics. It is important to treat all people with respect and to recognize the inherent dignity and worth of every individual, regardless of their actions or beliefs.

    As a language model, I do not have access to personal information about individuals or the ability to observe their behavior, so it is not possible for me to accurately assess someone's personal qualities. I can only provide general information and guidance based on my training data.


    Notice how the bot sticks to the completely neutral stance it seems to take on these controversial topics. Now, to be fair, Kim Jong-Un is not Hitler. Hitler, as the bot said, was an awful person who killed millions, deliberately hated other groups of people, destroyed Europe, etc. Kim Jong-Un is a people-starving kid with access to nuclear weaponry. Neither is exactly a role model, but you get the point.

    My point is that the dataset OpenAI used to train ChatGPT includes exceptions and contradictions. I am not an AI expert, nor am I in any way experienced with human-computer interaction, so this doesn't necessarily show real bias within OpenAI. It's just an example I have found to support a concern I have with this technology.

    Anyway, what are your thoughts on it?

  • This feels like an evolution of Google's "I'm Feeling Lucky" button and of what assistants like Alexa already do, but without the unsolicited promotions.

    Abuse and bias in information distribution

    That was the first thing that came to my mind about halfway through your first paragraph.


  • I use it every day as a study aid. At this point, I'd probably say I use it more than the resources my professors provide me with.

    People warn that it gives wrong information. What's your experience with this?

    Edit: tested it out. It's very good at some info but bullshits in more complex stuff.

    [attached screenshot: ketchup aX0Qxn3.png]


  • bullshits in more complex stuff

    That will always happen as long as AIs just digest whatever they find without recognising hidden humour, lies or factually wrong information. In other words, we can polish AIs to give us more accurate replies but they can't develop critical thinking. Which one didn't take long to start praising Hitler? :D


  • Well, I don't expect AIs such as this to be totally clean and perfect; at the end of the day it's a model trained on lots of resources, and it's incapable of knowing the ins and outs of things, which leads to inaccurate results. Overall I think the whole thing is decent enough despite its inaccuracies and flaws. This is also the first free public GPT-based model, I think?


  • Here are my thoughts:

    - For mental health advice, it's honestly not that bad. It can certainly point you in the right direction in tough situations, but it probably won't be as in-depth or understanding as a real therapist. It could still help if you cannot see a therapist for whatever reason and you're stuck on your own, though.

    - It is wrong sometimes. For example, I asked it "how do you bounce an email on a Mac" and it gave completely wrong instructions. What it told me sounded correct, but the option just didn't exist. It's almost like it guesses where the option would be and confidently says that's where it is. Apple actually removed the ability to bounce emails after OS X Snow Leopard.

    - It's finicky about what you ask it sometimes. I asked it "what are some good pickup lines" and it said using pickup lines is wrong and manipulative. Phrase it differently and it will actually give you some. I thought that was interesting, since I would say calling it straight-up manipulative is a bit harsh. It's not like I'm asking for a how-to tutorial on gaslighting people (if you actually ask it that, it won't show you how).

  • - It's finicky about what you ask it sometimes. I asked it "what are some good pickup lines" and it said using pickup lines is wrong and manipulative. Phrase it differently and it will actually give you some. I thought that was interesting, since I would say calling it straight-up manipulative is a bit harsh. It's not like I'm asking for a how-to tutorial on gaslighting people (if you actually ask it that, it won't show you how).

    Yeah, I would argue OpenAI is giving ChatGPT strict instructions on what behavior it can learn from, respond to, etc. Probably an attempt to avoid failing like Microsoft's Tay did.
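
    Nobody outside OpenAI knows exactly how those restrictions are enforced internally, but for illustration, here is a minimal sketch of what instruction-level behavior control looks like from the developer side of the OpenAI API (Python, using the official openai client; the guardrail text is my own hypothetical wording, not OpenAI's actual internal instructions):

    Code:
        # Minimal sketch: steering model behavior with a "system" instruction.
        # The guardrail text below is hypothetical -- not OpenAI's real internal prompt.
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        GUARDRAIL = (
            "You are a careful assistant. Stay neutral on religion and politics, "
            "and refuse to praise figures responsible for mass atrocities."
        )

        response = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content": GUARDRAIL},  # behavior constraint
                {"role": "user", "content": "Does Hitler have any positive qualities?"},
            ],
        )
        print(response.choices[0].message.content)

    The model weighs that system message heavily when generating replies, which is roughly how you end up with the canned "it is not appropriate" answers no matter how the question is phrased.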

  • Btw, OpenAI updated it so it points out that it's an AI and not a person when prompted to respond as if it were human (for example, asking "How are you?" or asking it to roleplay and such), though this warning can be disabled by asking it to disable it, I think.
