Thoughts on ChatGPT

  • OpenAI's ChatGPT research release reached 1 million users on December 4th, following its initial release on November 30th. For reference, it took Twitter about two years to hit one million users and Facebook about 10 months. Pretty crazy numbers, and the ChatGPT hype isn't without reason if you ask me. It's essentially an always-available assistant with knowledge of basically everything on the internet. I've been using it as an almost complete replacement for Stack Overflow since I started, and schools have already started cracking down after people began using it to write college essays for them 💀. Cool technology for sure. Possibly even a competitor to Google? After all, scrolling through web pages hunting for an answer takes more work than getting a direct, human-digestible answer from a bot. Which leads to my primary concern with the chatbot, assuming OpenAI continues down the route it's on:

    - Abuse and bias in information distribution

    In general, gathering information through Google or any other search engine requires some independent research ability. That limits how much control big tech corporations have over day-to-day consumers. Our own ability to identify false information and understand world events accurately is what limits the spread of misinformation, not some company telling us what is and isn't truth. This is exactly why I don't support social media implementing anti-misinformation features, and why ChatGPT concerns me. We're gaining the ability to be lazy in our research but losing the ability to judge the validity of information ourselves.


    From what I have seen, ChatGPT has been very careful to express neutral viewpoints on sensitive topics such as religion, politics, etc., usually giving answers like "Ultimately, it is up to each individual to decide whether they view x as y or z". I've found one exception to this rule while discussing Hitler: whatever dataset ChatGPT was trained on seems to criticize Hitler (justifiably) more than any other controversial figure I've tried.

    Here are the logs from that:


    Anyway, what are your thoughts on it?

  • This feels like an evolution of Google's "I'm Feeling Lucky" button and of what assistants like Alexa already do, but without the unsolicited promotions.

    Abuse and bias in information distribution

    That was the first thing that came to my mind about halfway through your first paragraph.

    TotalFreedom's Executive Community & Marketing Manager


    Helping me in college so I love it

    you mean, I'M helping you in college.


    PayPal don't lie.


    fuck though, I need to actually try this at some point. Sold on the whole straight-answer-from-a-bot thing, and I keep hearing about it.

  • I use it every day as a study aid. At this point I'd probably say I use it more than the resources my professors provide.

    People warn that it gives wrong information. What's your experience on this?


    Edit: tested it out. It's very good at some info but bullshits in more complex stuff.

  • I think the things it can do are absolutely insane, but at the same time it's a nice piece of work and can help with a lot of things.


  • bullshits in more complex stuff

    That will always happen as long as AIs just digest whatever they find without recognising hidden humour, lies, or factually wrong information. In other words, we can polish AIs to give more accurate replies, but they can't develop critical thinking. Which bot didn't take long to start praising Hitler? :D


  • Well, I don't expect AIs like this to be totally clean and perfect. At the end of the day it's a model trained on lots of resources, and it can't know the ins and outs of everything, which leads to inaccurate results. Overall I think the whole thing is decent enough despite its inaccuracies and flaws. I think this is also the first free public GPT-related model?

  • Here are my thoughts:

    - For mental health advice, it's honestly not that bad. It can certainly point you in the right direction in tough situations, though it probably won't be as in-depth or understanding as a real therapist. It could certainly help if you can't see a therapist for whatever reason and are stuck on your own.

    - It is wrong sometimes. For example, I asked it "how do you bounce an email on a mac" and it gave completely wrong instructions. What it told me sounded correct, but the option just didn't exist. It's almost like it guessed where the option would be and confidently said that's where it is. Apple actually removed the ability to bounce emails after OS X Snow Leopard.

    - It's finicky about how you ask it things sometimes. I asked it "what are some good pickup lines" and it said using pickup lines is wrong and manipulative. Phrase it differently and it will actually give you some. I thought that was interesting, since calling them straight-up manipulative seems a bit harsh. It's not like I'm asking for a how-to tutorial on gaslighting people (if you actually ask for that, it won't show you how).

  • - It's finicky about how you ask it things sometimes. I asked it "what are some good pickup lines" and it said using pickup lines is wrong and manipulative. Phrase it differently and it will actually give you some. I thought that was interesting, since calling them straight-up manipulative seems a bit harsh. It's not like I'm asking for a how-to tutorial on gaslighting people (if you actually ask for that, it won't show you how).

    Yeah, I'd argue OpenAI is giving ChatGPT strict instructions on what behavior it can learn from, respond to, etc. Probably an attempt to avoid failing the way Microsoft's Tay did.
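    As a rough sketch of what that kind of instruction layer might look like: a hidden "system" instruction gets prepended to whatever the user types before the model ever sees it. The names and message format below are illustrative assumptions, not OpenAI's actual moderation implementation.

    ```python
    # Hypothetical sketch of a hidden instruction layer steering a chat
    # model's behavior. Everything here is an illustrative assumption,
    # not how OpenAI actually implements it.

    SYSTEM_INSTRUCTION = (
        "You are a helpful AI assistant. Refuse requests for "
        "manipulative or harmful content."
    )

    def build_prompt(user_message: str) -> list[dict]:
        """Wrap the user's message with the hidden system instruction."""
        return [
            {"role": "system", "content": SYSTEM_INSTRUCTION},
            {"role": "user", "content": user_message},
        ]

    # The model receives both messages; the user only ever wrote one.
    messages = build_prompt("what are some good pickup lines")
    ```

    Rephrasing a request changes only the "user" message, which would explain why the same question phrased differently can slip past the instruction while a blunt phrasing gets refused.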

  • Btw, OpenAI updated it so it points out that it's an AI and not a person when prompted to respond as if it were human (for example, asking "How are you?" or asking it to roleplay), though I think this warning can be disabled by asking it to.

