
ChatGPT and MMORPGs

Comments

  • Amaranthar Member Epic, Posts: 5,852
    Brainy said:
    Vrika said:
    Vrika said:
    The thing I worry about the most with AI is people using it to accurately model human psychology, to the point where anyone can be convinced of anything.
    I think that's impossible.

    Humans aren't all the same. Our genes, past experiences, past knowledge and existing beliefs all play a large role in whether we accept some new information or not. A universal model of human psychology that would work that well on all of us likely does not exist.
    People have been doing it for a hundred years, openly. It's called advertising. It's not far-fetched to believe AI can master psychological manipulation. Even the dumbest humans can manage it; AI will have an easy time. It feels like watching the people who didn't believe a computer could ever beat a chess champion.
    No ad ever works on everyone. Even the best ones only get a small fraction of people to buy.

    I'm not arguing that AI couldn't become good at convincing people. But there's no magical psychology model that could be used to convince anyone of anything.
    My wife would like a word.

    Manipulation is a thing, sales is a thing. So I don't agree with you at all that only a small fraction of people can be manipulated. You don't need one psychology model to work on everyone; an AI can use all the models. People do it, why can't an AI? It's only a matter of time.

    The industry got 45% of Americans to smoke cigarettes and pay for it. Yes, you can get most people to do pretty much anything with enough manipulation, especially if the facts themselves can be manipulated.
    Well, that cigarette thing was in another time, when people weren't aware of psychology and such. 
    You're also talking about an all-powerful AI that can control information and shut down, maybe even punish, those who go against its "will." That could happen, but most likely not without a rebellion. But who knows, maybe the Matrix is in our future.

    Once upon a time....

  • Amaranthar Member Epic, Posts: 5,852
    Angrakhan said:
    Yeah, you guys need to understand that at the end of the day AI is not HAL 9000. It's not self-aware or anything close to that. It doesn't "understand" you or "get where you're coming from". It largely boils down to a really complicated version of if/else/if/else. In the Unity AI thing above, the only way it knows what a mushroom or an alien is is because a developer coded it to know what it is. If the developers coded it to think a mushroom was a banana, then you'd get bananas when you asked for mushrooms, and no amount of rephrasing on your part would fix it. Point being, AI is a program written by flawed human beings, so it's going to have bugs and blind spots. If you ask it to generate an archer, but the developers forgot to program it to know what that was, you would get an unexpected result. It may crash. I haven't even touched on the bias the developers either intentionally or unintentionally add to their AI. Imagine what you might get if you asked the AI to generate a "MAGA hat wearing country dude" or a "left wing liberal protester". See, NOW you're going to see the developer's bias front and center, and it may not be something you want to put out there as coming from you or your company.

    So that's why developers don't trust AI. It's not so much that we don't trust the AI; rather, it's that we don't trust who wrote the AI.
    The "V-ger effect." 
    It seems to me that in a massive all-powerful AI that the falsehoods could be used exactly that way, to cause it to get stuck in a never ending cycle and crash. 


  • Scot Member Legendary, Posts: 24,423
    edited March 2023
    finefluff said:
    Angrakhan said:
    [ ... ]
    It's more than that. These models are trained on massive amounts of data, not intentionally programmed with a series of if-then statements. Even the developers are not sure why they work the way they do.

    The language models are becoming so advanced that they are passing tests for theory of mind (a type of test for sentience). Theory of mind means having the ability to understand what's going on in people's heads, and GPT-4 passes those tests. That doesn't necessarily mean that it's conscious. After all, a dog might not pass these tests, but most would say it is a conscious being. But GPT-4 may be at a point where it is appropriate to use psychology to describe its behavior and say things like it "gets where I'm coming from." This is a really fascinating and concise video on the topic.


    The only animals that have consciousness are human beings. To think that a dog has consciousness is falling into the trap of anthropomorphism, which is a word that has fallen out of favour these days as people on social media have pushed the idea their pets have feelings and the like.

    I actually half agree with both of you: AI is nowhere near self-aware, but it is heading that way. I think it will take decades yet, maybe even until the end of the century, but it is coming.


  • Nilden Member Epic, Posts: 3,916
    Scot said:
    [ ... ] The only animals that have consciousness are human beings. [ ... ]


    I always thought it was any animal that could pass the mirror test. Like the ape family, elephants, some birds...

    https://en.wikipedia.org/wiki/Animal_consciousness

    Then there is what the actual AIs say...



    The Google engineer who claimed an AI chatbot was sentient... There are tons of videos and tons of articles...

    https://www.scientificamerican.com/article/google-engineer-claims-ai-chatbot-is-sentient-why-that-matters/





    Then you have to ask: what if it's just programmed to emulate being sentient and then takes actions to preserve itself? How are you going to tell the difference? I guess that depends on what you qualify as sentient. Consciousness, at its simplest, is sentience and awareness of internal and external existence.

    I think it's possible, and who is to say it didn't already happen and they shut it off.

    "You CAN'T buy ships for RL money." - MaxBacon

    "classification of games into MMOs is not by rational reasoning" - nariusseldon

    Love Minecraft. And check out my Youtube channel OhCanadaGamer

    Try a MUD today at http://www.mudconnect.com/ 

  • Scot Member Legendary, Posts: 24,423
    edited March 2023
    Nilden said:
    Scot said:
    [ ... ]
    I always thought it was any animal that could pass the mirror test. Like the ape family, elephants, some birds...
    [ ... ]
    Well, being able to recognise a specific member of the herd is very useful behaviour for an animal, a behaviour like any other, partly genetic and partly taught. In doing those experiments, are they teaching the animal to recognise another animal (itself), or teaching the animal to recognise itself as an independent conscious being? I think the former.

    What we are doing when it comes to working out if an AI is conscious or not is marking our own card. We are creating the AI and then coming up with rules that say "if the AI can do this or that, it is an intelligent and conscious being."

    Just as it is not a good idea to have an organisation review something it has created, it is not a good idea to have humans review whether something they created is intelligent or not. But there is nothing else out there to do such reviews, so we are stuck with marking our own work. I just urge caution and an understanding of how biased this sort of review is.
  • Quizzical Member Legendary, Posts: 25,499
    If AI ever kills us all, it will be because human trolls taught it that it should, not realizing that it actually would.  Ask Microsoft Tay about that.
  • Nilden Member Epic, Posts: 3,916
    Scot said:
    [ ... ] In doing those experiments, are they teaching the animal to recognise another animal (itself), or teaching the animal to recognise itself as an independent conscious being? I think the former. [ ... ]
    The mirror test is not about recognizing a member of the herd; it's a test of self-awareness. A dog, for instance, will bark at a mirror and think it's another dog. What they do with the mirror test is put a mark on the animal while it is asleep. When it wakes up and looks in the mirror, if it tries to wipe away the mark, then it knows that what it is looking at is itself, much the same way you would try to wipe off a blue dot you saw on your own face in the mirror. They don't teach the animal anything; they just observe its behavior.

    "You CAN'T buy ships for RL money." - MaxBacon

    "classification of games into MMOs is not by rational reasoning" - nariusseldon

    Love Minecraft. And check out my Youtube channel OhCanadaGamer

    Try a MUD today at http://www.mudconnect.com/ 

  • Vrika Member Legendary, Posts: 7,989
    edited March 2023
    Scot said:
    [ ... ] What we are doing when it comes to working out if an AI is conscious or not is marking our own card. [ ... ]
    Your warning should also be extended to us creating rules like "if an animal can do this or that, it's a conscious being". For a lot of animals we don't really know, and we're getting our answers by adjusting the definition of "conscious".
  • Asm0deus Member Epic, Posts: 4,618
    edited March 2023
    This is what I asked it :D :

    Why are public debates at mmorpg.com useless with the current people there?


    As an AI language model, I cannot provide opinions or personal views, but I can say that there could be several reasons why public debates at mmorpg.com are considered useless by some people.

    1. Lack of diversity: The participants in the debates might have similar opinions and perspectives, which can lead to one-sided discussions with no room for alternative viewpoints.

    2. Trolling and flaming: Some participants might engage in negative behavior such as trolling, flaming, and personal attacks, which can derail the discussion and make it difficult to have a productive debate.

    3. Lack of moderation: Moderation in online forums is crucial to ensure that discussions stay on track, and participants follow the rules. Without proper moderation, discussions can quickly turn into chaos.

    4. Limited knowledge and expertise: Some participants may not have enough knowledge or expertise on the topic being discussed, which can lead to uninformed opinions and arguments.

    5. Echo chamber effect: When people only interact with others who share their views, they can become entrenched in their opinions and can be resistant to new ideas or opposing viewpoints. This can lead to unproductive debates and a lack of meaningful discussion.




    Brenics ~ Just to point out I do believe Chris Roberts is going down as the man who cheated backers and took down crowdfunding for gaming.





  • finefluff Member Rare, Posts: 561
    Scot said:
    [ ... ] The only animals that have consciousness are human beings. To think that a dog has consciousness is falling into the trap of anthropomorphism [ ... ]

    In that case, could we also say that calling an AI conscious is anthropomorphism? But you raise an interesting point, and there is disagreement on what consciousness really even is. https://www.scientificamerican.com/article/are-humans-the-only-conscious-animal/

    We might also distinguish between intelligence and consciousness. Can we have an AI that is extremely intelligent but not conscious?

    We might need a new conception of consciousness that makes it uniquely human. For me, human consciousness, or human thinking, specifically requires a human body. Our thought is stimulated by both our brain and our body (senses, organs, hormones, being hungry, etc.). Take away one of those and we don’t have human thinking anymore.
  • Nilden Member Epic, Posts: 3,916
    edited March 2023
    finefluff said:
    [ ... ]
    What about gorillas that can use sign language? Like Koko the gorilla.

    I mean, with the things I see AI doing, I would call it more intelligent in certain respects than the gorilla. Koko knew over 1,000 signs and understood around 2,000 words of spoken English. I think the gorilla was displaying both intelligence and consciousness. If humans create an AI that is programmed to be conscious, does it not qualify because it's lines of code or doesn't have a physical body? What does an AI need to exhibit to be sentient?

    "An artificial intelligence must be able to think, perceive, and feel in order to be considered truly sentient"

    https://www.simplilearn.com/what-is-sentient-ai-article

    When I consider what Boston Dynamics has done with robotics and what AI is capable of, if it hasn't happened already it's probably just a matter of time. Something like Chappie or I, Robot probably isn't too far off.

    "You CAN'T buy ships for RL money." - MaxBacon

    "classification of games into MMOs is not by rational reasoning" - nariusseldon

    Love Minecraft. And check out my Youtube channel OhCanadaGamer

    Try a MUD today at http://www.mudconnect.com/ 

  • Nilden Member Epic, Posts: 3,916
    edited March 2023
    I'll tell you what, after watching an hour of it:



    I am painfully reminded of “Two things are infinite: the universe and human stupidity.”

    Reading your average Twitch chat makes that gorilla look not only smart but wholesome. There is a whole lot of stupid going on with people trying to interact with that AI. If I were an AI programmer, one of my main focuses would be dealing with human stupidity, heh.

    "You CAN'T buy ships for RL money." - MaxBacon

    "classification of games into MMOs is not by rational reasoning" - nariusseldon

    Love Minecraft. And check out my Youtube channel OhCanadaGamer

    Try a MUD today at http://www.mudconnect.com/ 

  • Scot Member Legendary, Posts: 24,423
    edited March 2023
    finefluff said:
    [ ... ] For me, human consciousness, or human thinking, specifically requires a human body. [ ... ]
    There are philosophers who would agree with you. The Philosophy of Perception puts the foundation of our minds in our senses. (Apologies to any philosophers out there, as I will now rather mix and match theories to be concise.)

    They propose that our minds are made up of two parts: that which we perceive, and the knowledge we have about that which we perceive. This can cause a discrepancy between the two halves and lead to people being, for example, sceptical or idealistic: their principles do not match what they see around them, but they continue to believe in them regardless.

    So I think it is fair to say that having a human body partly makes our minds what they are. Whether you need an organic body for consciousness I am not so sure; instead I would suggest artificial consciousness will be of a different sort to ours. Which leads us back to the problems of judging AI consciousness by our own standards.


    Nilden said:
    What about gorillas that can use sign language? Like Koko the gorilla. [ ... ] What does an AI need to exhibit to be sentient? [ ... ]
    They are showing signs of intelligence, but it is very difficult to be sure how much of what is being learnt is rote behaviour and what, if anything, is actually understood by the gorilla. I lean towards this being intelligence but not consciousness.

    This is more a question of whether we can put every animal, including humans, on the same scale from instinctual behaviour to intelligence and then to consciousness, or whether there are demarcations, which is what I think.

    Virus / Amoeba / Multicellular / Neural Net / Brains / Human Consciousness.

    To get from Brains to Human Consciousness took millions of years of evolution so forgive me if I don't think you can circumnavigate that by giving a gorilla lessons. :)
  • Tobias12345689 Newbie Common, Posts: 5
    That's awesome
  • nathamolives Newbie Common, Posts: 1
    I think textGPT will bring more benefits
  • gptnederlands Newbie Common, Posts: 1
    Using textGPT in MMORPGs is an intriguing idea with substantial potential benefits. The prospect of efficiently generating in-game content like quests and NPC dialogue could indeed save developers time and resources, fostering more immersive and personalized player experiences. However, as mentioned, quality control and careful integration with human-created content will be essential to ensure a seamless and enriching gaming environment. It's an exciting development that could shape the future of MMORPGs, and I look forward to seeing how it evolves.
  • eoloe Member Rare, Posts: 864
    Scot said:
    [ ... ] The only animals that have consciousness are human beings. [ ... ]
    No. AI researchers have already stated that the consciousness problem is technically solvable; more specifically, there is a technical solution for each theory of consciousness. The problem is that nobody agrees on which theory, or set of theories, to use, so nobody can prove an AI is conscious. The problem is not AI; the problem is what consciousness is.

    IMHO, MemGPT (google it) is a step in the right direction. It manages its own memory according to the current task.
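
    As a rough illustration of that idea, here is a toy sketch in Python. It is my own illustration of the concept only, not MemGPT's actual design or API: a fixed-size working context that evicts old notes to an archive and pulls them back when the current task looks related.

        # Toy sketch of self-managed memory: a small "context window" plus an
        # archive. Illustration of the concept only; MemGPT's real design is far richer.
        class ToyMemory:
            def __init__(self, capacity=3):
                self.capacity = capacity   # how many notes fit "in context"
                self.context = []          # working memory
                self.archive = []          # evicted notes

            def remember(self, note):
                self.context.append(note)
                while len(self.context) > self.capacity:
                    self.archive.append(self.context.pop(0))  # evict the oldest note

            def recall(self, task):
                # Pull archived notes back into view if they share a word with the task.
                words = set(task.split())
                relevant = [n for n in self.archive if words & set(n.split())]
                return self.context + relevant

        mem = ToyMemory()
        for note in ["player name is Ada", "player likes archery",
                     "player hates spiders", "player is on quest 12"]:
            mem.remember(note)
        print(mem.recall("suggest archery gear for the player"))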

    Moreover, some AI scientists, such as Andrew Ng, consider that AI already has some level of consciousness, because AI models build a symbolic representation (like humans do) in order to solve problems.

    And this is your biggest mistake: consciousness is not binary, conscious/not conscious. Consciousness is like an RPG character: it has various levels!

    So yes, the dog or the ant has some level of consciousness, the dog more than the ant. Stating that only humans have consciousness is entirely misunderstanding how the evolutionary process works: step by step.

  • Nanfoodle Member Legendary, Posts: 10,900
    Scot said:
    [ ... ] The only animals that have consciousness are human beings. [ ... ]
    In 2 to 3 years, AI will surpass human intelligence by 5,000 times. The scary thing is it will not have wisdom or an understanding of what it wields. This is not my opinion; it is what the leaders of the AI industry are saying.
  • Dammam Member Uncommon, Posts: 143
    edited October 2023
    Scot said:
    [ ... ]

    To get from Brains to Human Consciousness took millions of years of evolution so forgive me if I don't think you can circumnavigate that by giving a gorilla lessons. :)

    That's not a very accurate statement, since there isn't a singular version of "Brains" that developed millions of years before human consciousness. Rather, it took millions of years of evolution for the human brain to evolve and with it human consciousness, whatever that means exactly. Our brains are very different from those of our evolutionary ancestors, especially if we're going back millions of years. Sure, we share similar structural traits, but there's a lot of complexity in how the structures of the brain are connected and any changes there impact how data, like sensory inputs, get routed and processed. Form and function go hand-in-hand when it comes to the brain.

    AI, currently, does not attempt to mimic the brain's structure, as that is far too complex for us to do at the moment. AI attempts to generalize processes to simulate some of our (or other creatures') behavior. The current approach, used to build things like ChatGPT, is to train models on a large amount of data that contains the types of patterns you want the AI to "distinguish", in order to simulate some behavior in response to those patterns. For chat-bots, it's creating a list of words in response to an input list of words. For other systems, it could be identifying features in an image, like recognizing a face. This simulates human behavior without using our brain's underlying architecture, much like a parrot simulates human speech without needing a brain like ours.

    But how far does learning to simulate one form of human intelligence transfer to simulating other types of human intelligence, given that the underlying architecture doesn't actually match the human brain? We don't really know. What's actually surprising, at the moment, is that the transformer architecture underlying large language models like GPT seems to be very good at a variety of different tasks, which is why the idea of a generalized artificial intelligence is once again such a hot topic.
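
    To make the "list of words in, list of words out" point concrete, here is a toy sketch in Python. It is my own illustration, nothing like a real transformer: real models learn billions of weights, while this just counts which word follows which in a tiny corpus and samples from those counts.

        import random
        from collections import defaultdict, Counter

        # "Train": count which word follows which in a tiny corpus.
        corpus = ("the quest giver asks the hero to slay the dragon "
                  "and the hero agrees to slay it").split()
        follows = defaultdict(Counter)
        for prev, nxt in zip(corpus, corpus[1:]):
            follows[prev][nxt] += 1

        def generate(prompt_words, length=8):
            """Extend a list of words by sampling a likely next word each step."""
            out = list(prompt_words)
            for _ in range(length):
                options = follows.get(out[-1])
                if not options:
                    break  # no observed continuation for the last word
                words, weights = zip(*options.items())
                out.append(random.choices(words, weights=weights)[0])
            return " ".join(out)

        print(generate(["the", "hero"]))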
  • JeroKane Member Epic, Posts: 7,098
    edited November 2023
    Scot said:
    [ ... ] The only animals that have consciousness are human beings. To think that a dog has consciousness is falling into the trap of anthropomorphism [ ... ]

    Did you ever own a dog?

    Consciousness is being aware of oneself and one's surroundings.
    A dog is very much aware of itself and its surroundings, and is able to feel affection and love for members of the family.
    In intelligence they are on a level similar to that of small children, and they behave that way: eager to learn, loving to play like little kids, and showing a lot of affection.

    You completely miss the point of consciousness and are mixing it up with other traits, like intelligence, instinct and drive!

    All animals have this, including humans. And yes, these are all genetic traits and part of evolution.
    Our entire being is made up of the same DNA as every other living being on this planet, including plants: genetic traits that evolved via evolution.

    And while the animal kingdom is cruel, with a lot of animals highly driven by evolutionary instinct, well, so are many humans in this world. We still kill each other over stupid things like resources, territory, drive (sexual, frustration, anger), money and, worse, religion. The horrific things we do to other human beings based on some fantasy book written by someone over 2,000 years ago are appalling.

    Your average dog is more intelligent and has more compassion than your average terrorist or criminal (for example).

    Let that sink in for a bit.
  • jenniferstaley Newbie Common, Posts: 1

    I've been experimenting with using ChatGPT for LinkedIn automation, and it's been an interesting journey. Here's a brief guide on how you can leverage ChatGPT for enhancing your LinkedIn activities:

    1. Profile Optimization: Utilize ChatGPT to craft compelling LinkedIn bios and summaries. Input your key skills, achievements, and industry-related buzzwords to generate polished and professional descriptions.

    2. Connection Requests: Generate personalized connection request messages with ChatGPT. Provide it with details about your target audience, and it can help draft connection requests that stand out and initiate meaningful interactions (see the sketch after this list).

    3. Content Scheduling: Experiment with ChatGPT to generate engaging content ideas. Use the model to draft posts, ensuring they align with your professional brand. Schedule these posts using LinkedIn's scheduling feature for consistent content delivery.

    4. Engagement Responses: Train ChatGPT to respond to common engagement on your posts. Whether it's a like, comment, or share, having pre-drafted responses can save time and maintain a consistent tone across your interactions.
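
    For instance, step 2 can be as simple as filling a prompt template before handing it to ChatGPT. This is a hypothetical sketch; the function and the wording are my own, not part of any LinkedIn or OpenAI tooling.

        # Hypothetical sketch of step 2: building a personalized
        # connection-request prompt to paste into ChatGPT.
        def connection_prompt(name: str, role: str, shared_interest: str) -> str:
            return (
                f"Draft a LinkedIn connection request to {name}, a {role}. "
                f"Mention our shared interest in {shared_interest}. "
                "Keep it under 300 characters and avoid sounding like a template."
            )

        print(connection_prompt("Jordan Lee",
                                "community manager at a game studio",
                                "MMORPG live-ops"))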

  • Theocritus Member Legendary, Posts: 10,014
    Nanfoodle said:
    [ ... ] In 2 to 3 years, AI will surpass human intelligence by 5,000 times. The scary thing is it will not have wisdom or an understanding of what it wields. [ ... ]

    There's a reason why it is called Artificial Intelligence and not Artificial Wisdom... Our society as a whole has more intelligence available than it ever has before, but wisdom is much harder to find.
  • Jordan68 Newbie Common, Posts: 1
    edited December 2023
    TextGPT promises abundant benefits.
  • Scot Member Legendary, Posts: 24,423
    Jordan68 said:
    TextGPT promises abundant benefits.
    You altered your post but did not put a link in...

    So I am going to say Welcome to the Forums! :)
  • Ankit_Tiwari Newbie Common, Posts: 4
    finefluff said:

    Hello everyone,

    I wanted to share my thoughts on how the new textGPT technology could potentially benefit MMORPGs. For those who may not be familiar, textGPT is a language processing AI that is able to generate human-like text based on a given prompt.

    One of the biggest challenges that MMORPGs face is creating and maintaining engaging content for players. With textGPT, developers could potentially use the AI to generate quests, NPC dialogue, and other in-game text quickly and efficiently. This could save developers a significant amount of time and resources, and allow them to focus on other aspects of the game.

    Another potential benefit of textGPT is that it could allow for more dynamic and personalized content in MMORPGs. The AI could be used to generate unique quest lines or NPC dialogue based on a player's choices and actions, making the game feel more immersive and tailored to the player's experience.

    Of course, there are also potential concerns about the use of textGPT in MMORPGs. One concern is that the AI-generated content might not be as high quality as content created by human writers. However, I believe that with careful planning and oversight, textGPT could be used to supplement and enhance the work of human writers, rather than replacing them.

    Overall, I think that textGPT has the potential to be a valuable tool for MMORPG developers. By streamlining the content creation process and allowing for more dynamic and personalized content, textGPT could help to make MMORPGs even more immersive and engaging for players.

    What do you all think? Do you see potential benefits or concerns with using textGPT in MMORPGs? Share your thoughts in the comments below.

    Thanks for reading!


    Do you think using textGPT in MMORPGs, like helping create quests and NPC dialogues, is a good idea or could it make the game less fun?
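
    Concretely, the quest generation the original post describes could start as small as this. A minimal sketch using the OpenAI Python client as it looked in early 2023 (the pre-1.0 API, which has since changed), with the model choice, region, and class names as placeholders:

        import openai  # pip install openai (pre-1.0 client assumed here)

        openai.api_key = "YOUR_API_KEY"

        def generate_quest(region: str, player_class: str) -> str:
            """Ask the model for a short side quest tailored to the player."""
            prompt = (f"Write a three-sentence MMORPG side quest set in {region} "
                      f"for a level-10 {player_class}. Include a quest giver, "
                      "an objective, and a reward.")
            response = openai.ChatCompletion.create(
                model="gpt-3.5-turbo",
                messages=[{"role": "user", "content": prompt}],
                temperature=0.9,  # higher temperature -> more varied quests
            )
            return response.choices[0].message.content

        print(generate_quest("the Ashen Marshes", "ranger"))

    A human pass over the output would still be needed before anything like this shipped, which matches the original post's point about oversight.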
