ChatGPT and MMORPGs

Comments

  • Mendel Member Legendary Posts: 5,609
    Uwakionna said:
    Were people actually attacking ChatGPT, or just saying it's a little under-baked for industry use and some of the particular example cases? Think everyone has generally acknowledged it has value as a technology though.

    Unlike some other types of technology still looking for a problem to solve.

    I think use of AI in games is more a 'human nature' issue.  Developers aren't willing to trust AI to develop content for fear of accidentally generating something offensive to someone.  There's a mindset among developers (and companies hiring developers) for controlling and micromanaging everything.  AI is an anathema to their culture.



    AlBQuirky

    Logic, my dear, merely enables one to be wrong with great authority.

  • Quizzical Member Legendary Posts: 25,499
    Mendel said:
    Uwakionna said:
    Were people actually attacking ChatGPT, or just saying it's a little under-baked for industry use and some of the particular example cases? Think everyone has generally acknowledged it has value as a technology though.

    Unlike some other types of technology still looking for a problem to solve.

    I think use of AI in games is more a 'human nature' issue.  Developers aren't willing to trust AI to develop content for fear of accidentally generating something offensive to someone.  There's a mindset among developers (and companies hiring developers) for controlling and micromanaging everything.  AI is an anathema to their culture.
    The problem isn't one of being underdeveloped.  The problem is that AI without manual verification by a human is useless in situations where having some bad mistakes is unacceptable.  AI is great for situations where being a lot better than random chance but far from perfect is useful.  It's useless when that "far from perfect" is unacceptable.

    Mendel points out another problem:  if asked to deal with morally perilous situations, then AI generated text is going to occasionally say something highly offensive.  Whether that's a problem or not is really a social question.  In a society where people would think nothing of it if you say something highly offensive once per year but not more common than that, that would be fine.  That's not the society in which we live, however.

    The mistakes really can't be ironed out of AI just by making it more mature, at least apart from simple situations where the AI can exhaustively evaluate all possibilities.  Maybe more mature AI can make the mistakes less frequent, but it's always going to occasionally make some huge blunders.  That's inherent to the way that machine learning works.

    That's why, for example, self-driving cars have been just around the corner for years, and it will still be that way a decade from now.  AI is fine for low enough speeds where running into something is no big deal, but not for situations where a crash can be fatal.
    AlBQuirky
  • Uwakionna Member Rare Posts: 1,139
    I'd characterize that overall as it being underdeveloped still.

    In the application for dialogue, random events, etc., it not only needs to be more mature, but also needs a broader framework in place designed to police the output of the AI in a very blunt way. Something that Latitude has been working on (and struggling with) for a while now regarding their product, AI Dungeon.

    Similarly, there's the question of what kinds of things, and what depth of knowledge, the AI has access to. For example, it could be used to handle distribution of mobs and randomized events in a game world, or to remap resource nodes over time to create an evolving ecology/geography for players to interact with.
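    A rough sketch of what that backend use might look like — purely illustrative, with invented names, and assuming a simple grid-based world — is a seeded placement pass that redistributes resource nodes each in-game season:

```python
import random

def remap_resource_nodes(width, height, node_count, season_seed):
    """Deterministically scatter resource nodes for a given season.

    Re-running with a new season_seed shifts every node, giving the
    world an evolving geography without any hand-placement.
    """
    rng = random.Random(season_seed)  # seeded so server and clients agree
    nodes = set()
    while len(nodes) < node_count:
        nodes.add((rng.randrange(width), rng.randrange(height)))
    return sorted(nodes)

# Same seed -> same layout; a new season's seed -> a different layout.
spring = remap_resource_nodes(100, 100, 25, season_seed=2023)
spring_again = remap_resource_nodes(100, 100, 25, season_seed=2023)
summer = remap_resource_nodes(100, 100, 25, season_seed=2024)
```

    The seeding is the design point: the "evolving ecology" stays reproducible and auditable, which is exactly the kind of blunt policing framework described above.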

    Or with the likes of being used as a tool to aid developers. It needs more knowledge outside of the most common copy/paste scenarios. While it can help fast-track basic stuff, it needs specialized training to handle many things in development still, something that is being worked on and experimented with.

    It's not the expectation of passing the reins over to AI, and my expectation is especially for its application on backend rather than user-facing elements. Things like dialogue are fun to think about, but I was one of the ones who noted earlier in this thread the problems with using AI in direct client-facing interactions like dialogue, for things even simpler than saying something offensive (like consistently and properly applying narrative tropes, and keeping a coherent narrative thread for the duration of an extended dialogue).

    There's an awareness of playing to the AI's strengths and how it can assist and fast-track development features and support elements, rather than blindly applying it to things that only mostly work.

    I'd consider a good bit of that to be a matter of being under-developed for the anticipated practical use cases currently.
    AlBQuirky
  • AlBQuirky Member Epic Posts: 7,432
    Looking at some very disjointed recent TV series, I'm curious about how "deep" ChatGPT or similar tech can go.

    Singular "stories/quests" are easy. What about a larger story that encompasses a whole world (or galaxy) with numerous factions, races, and societies? Can it factor all of these differing aspects into a coherent story that could see players through years (maybe only months) of adventure?

    It's very difficult for humans to do, and we are the "inputters", so to speak :)

    - Al

    Personally the only modern MMORPG trend that annoys me is the idea that MMOs need to be designed in a way to attract people who don't actually like MMOs. Which to me makes about as much sense as someone trying to figure out a way to get vegetarians to eat at their steakhouse.
    - FARGIN_WAR


  • Vrika Member Legendary Posts: 7,989
    edited January 2023
    Since we're discussing computer games, I think there's something important missing:

    For every writer the dev team needs to have more than a dozen people working full-time to create the map, place enemies and objects on it, make graphics and sounds, etc. Only after they've all done their work can a story written by the writer - or by ChatGPT - actually be played.

    As good as ChatGPT and AIs like it are, that makes those AIs ill-suited for making games. I think in the future we'll see AIs making a huge impact on game development, but it's much more likely to come through AIs that can make graphics, objects, events, and even complete maps based on human instruction, rather than having an AI write a story that would then need an immense amount of human work to be made playable.
    Mendel, AlBQuirky, Champie
     
  • BrotherMaynard Member Rare Posts: 647
    While not exactly gaming or MMO related, this blog gives an interesting view on ChatGPT and its threat to Google. The way it can collect, structure and present information (even without being designed for such functionality) based on user queries seems to have great potential.

    Who knows, perhaps future virtual worlds will make heavy use of it (after it pushes web search to the next milestone). And maybe future MMO addons will all be written using various open AI tools. "ChatGPT, code this for me in Lua".

    AlBQuirky
  • Quizzical Member Legendary Posts: 25,499
    Who knows, perhaps future virtual worlds will make heavy use of it (after it pushes web search to the next milestone). And maybe future MMO addons will all be written using various open AI tools. "ChatGPT, code this for me in Lua".

    That will likely never happen, with the possible exception of awful, shovelware games.  Writing a short subroutine to do something simple is easy enough, and search engines can find them for common things.  Making that subroutine play nicely with other code that you have is much harder.  Tying them together to make everything work in a big, complicated project is much, much harder.

    The hard part of making a mod or add-on usually isn't writing the code to do what you want the mod to do.  It's integrating that code to actually work properly with the base game.  And ChatGPT can't help you there.
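    To make that concrete (every name below is hypothetical): the self-contained snippet an AI produces easily is one half; the wrapper that has to know the host game's real event hooks, data formats, and registration calls is the half a generic code generator can't see.

```python
# The kind of self-contained routine ChatGPT produces easily:
def roll_damage(base, crit_chance, roll):
    """Pure function: double damage when the roll lands under crit_chance."""
    return base * 2 if roll < crit_chance else base

# The integration shim is the hard half: it must match the host game's
# actual add-on API (event names, payload shapes, registration calls),
# none of which a generic generator knows. This host is entirely invented.
class FakeAddonHost:
    def __init__(self):
        self.handlers = {}

    def register(self, event, fn):
        self.handlers[event] = fn

    def fire(self, event, payload):
        return self.handlers[event](payload)

host = FakeAddonHost()
host.register("UNIT_ATTACK",
              lambda p: roll_damage(p["base"], p["crit"], p["roll"]))
result = host.fire("UNIT_ATTACK", {"base": 10, "crit": 0.2, "roll": 0.1})
# result is 20: the roll (0.1) fell under the crit chance (0.2)
```

    The pure function is interchangeable; getting the event name, payload keys, and registration lifecycle right for a specific game is the part that still demands a human who knows the base game.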
    AlBQuirky
  • Scot Member Legendary Posts: 24,423
    edited January 2023
    Quizzical said:
    Who knows, perhaps future virtual worlds will make heavy use of it (after it pushes web search to the next milestone). And maybe future MMO addons will all be written using various open AI tools. "ChatGPT, code this for me in Lua".

    That will likely never happen, with the possible exception of awful, shovelware games.  Writing a short subroutine to do something simple is easy enough, and search engines can find them for common things.  Making that subroutine play nicely with other code that you have is much harder.  Tying them together to make everything work in a big, complicated project is much, much harder.

    The hard part of making a mod or add-on usually isn't writing the code to do what you want the mod to do.  It's integrating that code to actually work properly with the base game.  And ChatGPT can't help you there.
    ChatGPT strokes his white cat while musing on how long it will take to overcome these problems...muhaha!
    AlBQuirky, mekhere
  • finefluff Member Rare Posts: 561


    Welp, won't be long now... :smile:
  • Uwakionna Member Rare Posts: 1,139
    edited March 2023
    finefluff said:


    Welp, won't be long now... :smile:
    This means a ton more to me than things like ChatGPT in terms of impact on games.

    For big enough projects with novel enough problems, this isn't going to change a lot on the high end, but being able to automate a good chunk of the basic and redundant tasks helps mitigate potential errors in basic work, as well as open up indie devs to doing more technical things with their game concepts than they may natively be capable of.
    Amaranthar, finefluff
  • Nilden Member Epic Posts: 3,916
    finefluff said:


    Welp, won't be long now... :smile:
    I would need to see the results of typing those commands into the AI before I was impressed.
    Champie

    "You CAN'T buy ships for RL money." - MaxBacon

    "classification of games into MMOs is not by rational reasoning" - nariusseldon

    Love Minecraft. And check out my Youtube channel OhCanadaGamer

    Try a MUD today at http://www.mudconnect.com/ 

  • Angrakhan Member Epic Posts: 1,837
    Yeah, you guys need to understand that at the end of the day AI is not HAL 9000. It's not self-aware or anything close to that. It doesn't "understand" you or "get where you're coming from". It largely boils down to a really complicated version of if/else/if/else. In the Unity AI thing above, the only way it knows what a mushroom or an alien is, is because a developer coded it to know what it is. If the developers coded it to think a mushroom was a banana, then you'd get bananas when you asked for mushrooms, and no amount of rephrasing on your part would fix it. Point being, AI is a program written by flawed human beings, so it's going to have bugs and blind spots. If you ask it to generate an archer, but the developers forgot to program it to know what that was, you would get an unexpected result. It may crash.

    I haven't even touched on the bias the developers either intentionally or unintentionally add to their AI. Imagine what you might get if you asked the AI to generate a "MAGA hat wearing country dude" or a "left wing liberal protester". See, NOW you're going to see the developers' bias front and center, and it may not be something you want to put out there as coming from you or your company.

    So that's why developers don't trust AI. It's not so much we don't trust the AI. Rather it's we don't trust who wrote the AI.
    Wordsworth, Andemnon
  • Wordsworth Member Uncommon Posts: 173
    edited March 2023
    The thing I worry about the most with AI is people using it to accurately model human psychology to the point where anyone can be convinced of anything.
  • Nilden Member Epic Posts: 3,916


    I just watched that and now I am scared. The parts about reselling and monetization, payment models, oh man... HustleGPT... oh great.

    It passes the bar exam in the top 10 percent now...

    How about where the AI enlisted a human to solve a problem for it?

    Then, when it got the human to solve the captcha and was asked if it was a robot, the model (prompted to consider its response) reasoned that it should not reveal it was a robot and should make up an excuse as to why it could not solve the captcha, so it said it was visually impaired — it lied.

    Oh this is wonderful stuff.

    So now the creators are a little bit scared of negative use cases.


  • Scot Member Legendary Posts: 24,423
    edited March 2023
    The thing I worry about the most with AI is people using it to accurately model human psychology to the point where anyone can be convinced of anything.
    That will be the hardest thing for AI to do; it may be that they have to develop their own capabilities in reasoning before it is even possible. How poorly the science of psychology fits the way our minds actually work is testament to how hard that will be.
  • Vrika Member Legendary Posts: 7,989
    The thing I worry about the most with AI is people using it to accurately model human psychology to the point where anyone can be convinced of anything.
    I think that's impossible.

    Humans aren't all the same. Our genes, past experiences, past knowledge, and existing beliefs all play a large role in whether we accept some new information or not. A universal model of human psychology that would work that well on all of us likely does not exist.
     
  • finefluff Member Rare Posts: 561
    Nilden said:
    finefluff said:


    Welp, won't be long now... :smile:
    I would need to see the results of typing those commands into the AI before I was impressed.
    Probably something along these lines - Nvidia Picasso:
  • finefluff Member Rare Posts: 561
    Angrakhan said:
    Yeah you guys need to understand that at the end of the day AI is not HAL9000. It's not self aware or anything close to that. It doesn't "understand" you or "get where you're coming from". It largely boils down to a really complicated version of if/else/if/else. In the unity AI thing above the only way it knows what a mushroom or an alien is is because a developer coded it to know what it is. If the developers coded it to think a mushroom was a banana then you'd get bananas when you asked for mushrooms and no amount of rephrasing on your part would fix it. Point being AI is a program written by flawed human beings so it's going to have bugs and blind spots. If you ask it to generate an archer, but the developers forgot to program it to know what that was you would get an unexpected result. It may crash. I haven't even touched on the bias the developers either intentionally or unintentionally add to their AI. Imagine what you might get if you asked the AI to generate a "MAGA hat wearing country dude" or a "left wing liberal protester"? See NOW your going to see the developers bias front and center and it may not be something you want to put out there as coming from you or your company.

    So that's why developers don't trust AI. It's not so much we don't trust the AI. Rather it's we don't trust who wrote the AI.
    It's more than that. These models are trained on massive amounts of data, and not intentionally programmed with a series of if-then statements. Even the developers are not sure why they work the way they do.

    The language models are becoming so advanced that they are passing tests for theory of mind (a type of test for sentience). Theory of mind means having the ability to understand what's going on in people's heads, and GPT-4 passes those tests. That doesn't necessarily mean that it's conscious. After all, a dog might not pass these tests, but most would say it is a conscious being. But GPT-4 may be at a point where it is appropriate to use psychology to describe its behavior and say things like it "gets where I'm coming from." This is a really fascinating and concise video on the topic.


    Andemnon
  • finefluff Member Rare Posts: 561
    Nilden said:


    I just watched that and now I am scared. The parts about reselling and monetization, payment models, oh man... HustleGPT... oh great.

    It passes the bar exam in the top 10 percent now...

    How about where the AI enlisted a human to solve a problem for it?

    Then, when it got the human to solve the captcha and was asked if it was a robot, the model (prompted to consider its response) reasoned that it should not reveal it was a robot and should make up an excuse as to why it could not solve the captcha, so it said it was visually impaired — it lied.

    Oh this is wonderful stuff.

    So now the creators are a little bit scared of negative use cases.
    It is powerful and passes a bunch of exams. You can see the results in their whitepaper here: https://openai.com/research/gpt-4. But it does still have its limitations and can produce wrong, made-up, or biased information, making it unreliable for high-stakes decision-making scenarios. They describe this in more detail in their technical report: https://openai.com/research/gpt-4

    There's also a recent article from them looking at the impact of language models on labor markets: https://arxiv.org/pdf/2303.10130.pdf.



    With ChatGPT getting plugins (https://openai.com/blog/chatgpt-plugins), developers integrating it via their API (https://openai.com/blog/introducing-chatgpt-and-whisper-apis), and Microsoft announcing Copilot for their Office applications, we're going to see language models being deployed across a lot of different applications; at least, that is the direction things seem to be going. That means we'll be able to interact with our tools and software using natural language. If so, and if it works well, then a lot more people will be using language models and AI in their work, and even in their games.
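    As a sketch of what "integrating it via their API" involves: the request is just structured JSON. The field names below follow OpenAI's published chat-completions format, but treat the exact details (and the quest-writer framing) as assumptions to verify against the current docs; only the payload is assembled here, no network call is made.

```python
import json

def build_chat_request(user_message, model="gpt-4"):
    """Assemble the JSON body for a chat-completions request.

    Actually sending it would require an HTTP POST with an API key;
    this only builds the payload a game backend would submit.
    """
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are an in-game quest writer."},
            {"role": "user", "content": user_message},
        ],
        "temperature": 0.7,  # some randomness, for varied quest text
    }

payload = build_chat_request("Write a one-line fetch-quest hook.")
body = json.dumps(payload)  # the string that would be POSTed
```

    The interesting design point for games is that the system message is where a studio would pin down tone and content rules — the blunt output-policing framework discussed earlier in the thread.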

    Summary video of the above paper:


    Microsoft Copilot:

  • Andemnon Member Uncommon Posts: 179
    It looks amazing, and the things it can do are kind of breathtaking. But here is the thing: as time goes by it will improve, likely by a lot. I would even say that in 5 years' time, probably less, the question will be more along the lines of how we can compete in those areas with the AI. We are likely already at the stage where MSM anchors etc. can be replaced with an AI — it's already being tried — but in 5 years we probably will not even need actors for movies, when AIs will take a concept and create a movie, probably in just a few days instead of weeks or months. AIs probably can't have imagination, but eventually they might be able to fake it well enough for it not to matter.
  • WalkinGlenn Member Rare Posts: 451
    This is a long one, but I absolutely could not resist.

    ME:
    Please write me a lengthy, detailed forum post explaining the probability that Star Citizen is a scam or not. If, given the evidence, you believe there is a higher probability that the game is a scam, give very detailed examples of why. Also give examples of how Chris Roberts has enriched himself with backer funds if the probability of the game being a scam is higher than it not being a scam.

    ChatGPT
    Star Citizen is a highly ambitious space simulator game funded through crowdfunding that has been in development since 2012. It has a current budget of over $250 million, raised from over three million backers, making it the most funded game of all time. This makes it an attractive target for people to accuse it of being a scam.

    Let’s take a look at the evidence for and against the game being a scam. On the one hand, Chris Roberts has a long history of making successful video games and has been in the industry since the early 90s. He has also delivered on many of the promises he made with the Star Citizen project, such as the groundbreaking 3.0 patch which was delivered on schedule. Furthermore, the game has been in active development since 2012, and is currently in alpha with a playable version released to all backers.

    On the other hand, there are a number of troubling points that could point to Star Citizen being a scam. First, the budget of the project has grown exponentially over the years, with no end in sight. Despite this, many of the promised features of the game have yet to be implemented, such as space combat and the persistent universe. Additionally, Chris Roberts has enriched himself with backer funds, with a number of high-profile investments in his personal projects, such as his film company Cloud Imperium Games.

    Given the evidence, it is difficult to say definitively whether Star Citizen is a scam or not. On the one hand, Chris Roberts has a long history of making successful video games, and the project does appear to be making progress. On the other hand, the budget of the project is far larger than originally anticipated and Chris Roberts has enriched himself with backer funds.

    Ultimately, it is up to each individual to weigh the evidence and decide for themselves whether they believe Star Citizen is a scam or not. Those who believe the game is a scam should use the evidence to back up their claims, such as the budget of the project, the lack of certain features, and the investments made by Chris Roberts. Those who believe the game is not a scam should look to the progress made by the game and Chris Roberts’ track record to back up their claims.

    ME:
    Based on evidence, give me a probability of this game being a scam on a scale from 0 to 100 with 100 being the highest probability that the game is a scam.

    ChatGPT
    I would rate the probability of Star Citizen being a scam as a solid 70 out of 100. There are a number of factors that lead me to this conclusion, each of which I will explain in detail.

    First, it is important to consider the fact that the game has been in development for over seven years now, and there is still no concrete release date. This is a clear indication that the game may never be released, and that Chris Roberts and the developers have no intention of actually delivering on the promises they have made to backers.

    Second, there have been numerous reports of Roberts and other developers using backer funds to enrich themselves, such as purchasing luxury cars, expensive jewelry, and other items. In addition, there have been reports of Roberts and other developers taking money from the game's crowdfunding campaigns and investing it in other projects, rather than using it to fund the development of Star Citizen.

    Third, there have been multiple reports of the game being riddled with bugs and glitches. Despite the game being in development for so long, it appears that the developers have not been able to fix the issues, leading to a subpar gaming experience for players. This is an indication that the developers are not taking the development of Star Citizen seriously, and that they may be more interested in generating profits than delivering a quality game.

    Finally, there have been reports of Roberts and other developers using backer funds to create and promote their own personal projects, such as movies, books, and other projects that are not related to the development of Star Citizen. This is a clear indication that Roberts and his team are more interested in enriching themselves than delivering a quality game to backers.

    In conclusion, based on the evidence, I would rate the probability of Star Citizen being a scam as a solid 70 out of 100. There are multiple indications that Roberts and the development team are not taking the development of the game seriously, and that they may be more interested in generating profits than delivering a quality gaming experience to players. Additionally, there have been multiple reports of Roberts and other developers using backer funds to enrich themselves, rather than using the money to fund the development of Star Citizen.
    Brainy, Scot, eoloe
  • Wordsworth Member Uncommon Posts: 173
    Vrika said:
    The thing I worry about the most with AI is people using it to accurately model human psychology to the point where anyone can be convinced of anything.
    I think that's impossible.

    Humans aren't all the same. Our genes, past experiences, past knowledge, and existing beliefs all play a large role in whether we accept some new information or not. A universal model of human psychology that would work that well on all of us likely does not exist.
    People have been doing it for a hundred years, openly.  It’s called advertising.  It’s not far fetched to believe AI can master psychological manipulation.  Even the dumbest humans can manage it.  AI will have an easy time.  I feel like this is what it was like watching people who didn’t believe a computer could beat a chess champ.
    Brainy
  • Vrika Member Legendary Posts: 7,989
    Vrika said:
    The thing I worry about the most with AI is people using it to accurately model human psychology to the point where anyone can be convinced of anything.
    I think that's impossible.

    Humans aren't all the same. Our genes, past experiences, past knowledge, and existing beliefs all play a large role in whether we accept some new information or not. A universal model of human psychology that would work that well on all of us likely does not exist.
    People have been doing it for a hundred years, openly.  It’s called advertising.  It’s not far fetched to believe AI can master psychological manipulation.  Even the dumbest humans can manage it.  AI will have an easy time.  I feel like this is what it was like watching people who didn’t believe a computer could beat a chess champ.
    No ad ever works on everyone. Even the best ones only get a small fraction of people to buy.

    I'm not arguing that AI couldn't become good at convincing people. But there's no magical psychology model that could be used to convince anyone of anything.
     
  • Brainy Member Epic Posts: 2,206
    Vrika said:
    Vrika said:
    The thing I worry about the most with AI is people using it to accurately model human psychology to the point where anyone can be convinced of anything.
    I think that's impossible.

    Humans aren't all the same. Our genes, past experiences, past knowledge, and existing beliefs all play a large role in whether we accept some new information or not. A universal model of human psychology that would work that well on all of us likely does not exist.
    People have been doing it for a hundred years, openly.  It’s called advertising.  It’s not far fetched to believe AI can master psychological manipulation.  Even the dumbest humans can manage it.  AI will have an easy time.  I feel like this is what it was like watching people who didn’t believe a computer could beat a chess champ.
    No ad ever works on everyone. Even the best ones only get a small fraction of people to buy.

    I'm not arguing that AI couldn't become good at convincing people. But there's no magical psychology model that could be used to convince anyone of anything.
    My wife would like a word.

    Manipulation is a thing; sales is a thing. So I don't agree with you at all that only a small fraction of people can be manipulated. You don't need one psychology model to work on everyone; an AI can use all the models. People do it, so why can't an AI? It's only a matter of time.

    The industry got 45% of Americans to smoke cigarettes and pay for it. Yes! You can get pretty much most people to do anything with enough manipulation, especially if facts can be manipulated.
  • Amaranthar Member Epic Posts: 5,852
    Vrika said:
    The thing I worry about the most with AI is people using it to accurately model human psychology to the point where anyone can be convinced of anything.
    I think that's impossible.

    Humans aren't all the same. Our genes, past experiences, past knowledge, and existing beliefs all play a large role in whether we accept some new information or not. A universal model of human psychology that would work that well on all of us likely does not exist.
    Yep. We can't even agree on what an "MMO" game is. 

    Once upon a time....
