Why are the update times so damned long?


Comments

  • prowess Member Uncommon Posts: 169
    Originally posted by CharminUltra561
    Originally posted by movros99
    Originally posted by prowess

    my creds: I perform updates and maintenance on enterprise infrastructure for a living.

     

    My guess is that for this "megaserver" they're using a cluster of hosts and many-MANY virtual machines.  They have virtual machines that serve the logins, serve the patching, serve the game, and function as the database-backend.  For the update, they have to install updates and reboot each individual VM...  If they're doing a database engine update, the DB-cluster must be rotated precisely.  This will usually have to be done via cmd line...  One slip-up here and you've got to restore a backup and start over.

    Yeesh.  This sounds tedious and creates an environment prone to bugs and...um well yea.  This makes sense.

    To your point, it is long and tedious; however, the systems they would need in place to do it quicker are more than likely more expensive than they can afford at the moment.

    I work on the dev side of enterprise apps, but I'm decently familiar with the problems our deployment teams go through, and the MegaServer theory is probably exactly what is happening.

    They will eventually refine and fine-tune the process and create batch applications that can update multiple VMs safely.  For all we know they are doing it by hand >

    Also, if they did mess up and they have to roll back, the restore can fail or time out and they have to try again.  This could go on for a long time lol.  Just be patient.

    Oh yeah, and to add, I would imagine that the backup/restore solution does not EVER give an accurate estimated time of completion, so it may be much quicker than "this evening."

     

    And, yeah, I would imagine they're doing the updates by hand since there are so many moving pieces...  I would imagine that the login server's update was really simple...  the patching servers probably just needed the new content added, which was also very simple and clean...  the game servers probably updated without a hitch...  but updating the database engine is probably arduous, and there's probably a very limited staff who can actually perform this update...  But I would imagine that the game-world data and the character data are safe and intact, as the situations in which you would need to alter the character tables are really rare...  the game-world database probably needed some changes made...

     

    I apologize for all of the rambling...  I've been drinking a lot of coffee.
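
    For anyone curious what those "batch applications that can update multiple VMs safely" might look like, here is a rough sketch of a rolling-update driver. To be clear, this is purely hypothetical: the hostnames, the health-check port, and the package commands are placeholders I made up, not anything ZeniMax actually runs.

    ```python
    #!/usr/bin/env python3
    """Rough sketch of a rolling-update driver for a fleet of game-service VMs.

    Hypothetical only: the hostnames, health-check port, and package commands
    below are placeholders, not ZeniMax's actual tooling or topology.
    """
    import subprocess
    import sys
    import time
    import urllib.request

    # Imaginary fleet: patch one VM at a time so the service never goes fully dark.
    # (The database cluster is a separate, messier story.)
    HOSTS = ["login01.example.net", "patch01.example.net", "game01.example.net"]
    HEALTH_PORT = 8080


    def remote(host, command, check=True):
        """Run one shell command on the remote host over SSH."""
        return subprocess.run(["ssh", host, command], check=check)


    def healthy(host, timeout=5.0):
        """True if the host's (made-up) health endpoint answers HTTP 200."""
        try:
            with urllib.request.urlopen(
                f"http://{host}:{HEALTH_PORT}/health", timeout=timeout
            ) as resp:
                return resp.status == 200
        except OSError:
            return False


    def main():
        for host in HOSTS:
            print(f"--- updating {host} ---")
            remote(host, "systemctl stop gameservice")   # drain the node first
            remote(host, "yum -y update")                # placeholder patch step
            remote(host, "reboot", check=False)          # SSH drops here; that's fine

            # Do not touch the next VM until this one is confirmed healthy again.
            for _ in range(20):
                time.sleep(30)
                if healthy(host):
                    break
            else:
                print(f"{host} never came back healthy; stopping the rollout.")
                return 1
        return 0


    if __name__ == "__main__":
        sys.exit(main())
    ```

    The whole point is the "stop if a node doesn't come back healthy" part; that check is the difference between a safe rollout and a very long restore-from-backup evening.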

  • kDeviL Member Uncommon Posts: 215

    MMO players...

     

    Post 1 - "Omg the bugs, so many bugs!"

    Post 2 - "Omg they're fixing bugs and I can't play!"

    If WoW was released today, even in its entirety, it would be f2p in 3 months.
    Why is it still such a big deal?

  • mbrodie Member Rare Posts: 1,504
    Originally posted by CharminUltra561
    Originally posted by movros99
    Originally posted by prowess

    my creds: I perform updates and maintenance on enterprise infrastructure for a living.

     

    My guess is that for this "megaserver" they're using a cluster of hosts and many-MANY virtual machines.  They have virtual machines that serve the logins, serve the patching, serve the game, and function as the database-backend.  For the update, they have to install updates and reboot each individual VM...  If they're doing a database engine update, the DB-cluster must be rotated precisely.  This will usually have to be done via cmd line...  One slip-up here and you've got to restore a backup and start over.

    Yeesh.  This sounds tedious and creates an environment prone to bugs and...um well yea.  This makes sense.

    To your point, it is long and tedious; however, the systems they would need in place to do it quicker are more than likely more expensive than they can afford at the moment.

    I work on the dev side of enterprise apps, but I'm decently familiar with the problems our deployment teams go through, and the MegaServer theory is probably exactly what is happening.

    They will eventually refine and fine-tune the process and create batch applications that can update multiple VMs safely.  For all we know they are doing it by hand >

    Also, if they did mess up and they have to roll back, the restore can fail or time out and they have to try again.  This could go on for a long time lol.  Just be patient.

    You realise ZeniMax is one of the "richest" privately owned video game companies in North America, right, with deep ties in the movie industry and some actual high-profile people on their board... a guildy looked it up and it was actually fairly interesting information.

     

    ZeniMax is the largest privately held video games company in North America[5]

    ZeniMax Media was reportedly valued at about USD $1.2 billion, in 2007.[6][7]

    Corporate governance

    The company's Board of Directors consists of 8 individuals:

  • CharminUltra561 Member Posts: 4
    Originally posted by mbrodie
    Originally posted by CharminUltra561
    Originally posted by movros99
    Originally posted by prowess

    my creds: I perform updates and maintenance on enterprise infrastructure for a living.

     

    My guess is that for this "megaserver" they're using a cluster of hosts and many-MANY virtual machines.  They have virtual machines that serve the logins, serve the patching, serve the game, and function as the database-backend.  For the update, they have to install updates and reboot each individual VM...  If they're doing a database engine update, the DB-cluster must be rotated precisely.  This will usually have to be done via cmd line...  One slip-up here and you've got to restore a backup and start over.

    Yeesh.  This sounds tedious and creates an environment prone to bugs and...um well yea.  This makes sense.

    To your point, it is long and tedious; however, the systems they would need in place to do it quicker are more than likely more expensive than they can afford at the moment.

    I work on the dev side of enterprise apps, but I'm decently familiar with the problems our deployment teams go through, and the MegaServer theory is probably exactly what is happening.

    They will eventually refine and fine-tune the process and create batch applications that can update multiple VMs safely.  For all we know they are doing it by hand >

    Also, if they did mess up and they have to roll back, the restore can fail or time out and they have to try again.  This could go on for a long time lol.  Just be patient.

    You realise ZeniMax is one of the "richest" privately owned video game companies in North America, right, with deep ties in the movie industry and some actual high-profile people on their board... a guildy looked it up and it was actually fairly interesting information.

     

    ZeniMax is the largest privately held video games company in North America[5]

    ZeniMax Media was reportedly valued at about USD $1.2 billion, in 2007.[6][7]

     

     

    Doesn't mean they have those resources allocated to the department running this game.  Don't assume they aren't waiting to see a good ROI and get their recurring revenue stream flowing before they invest millions more into the game.

  • mbrodie Member Rare Posts: 1,504
    Originally posted by CharminUltra561
    Originally posted by mbrodie
    Originally posted by CharminUltra561
    Originally posted by movros99
    Originally posted by prowess

    my creds: I perform updates and maintenance on enterprise infrastructure for a living.

     

    My guess is that for this "megaserver" they're using a cluster of hosts and many-MANY virtual machines.  They have virtual machines that serve the logins, serve the patching, serve the game, and function as the database-backend.  For the update, they have to install updates and reboot each individual VM...  If they're doing a database engine update, the DB-cluster must be rotated precisely.  This will usually have to be done via cmd line...  One slip-up here and you've got to restore a backup and start over.

    Yeesh.  This sounds tedious and creates an environment prone to bugs and...um well yea.  This makes sense.

    To your point, it is long and tedious; however, the systems they would need in place to do it quicker are more than likely more expensive than they can afford at the moment.

    I work on the dev side of enterprise apps, but I'm decently familiar with the problems our deployment teams go through, and the MegaServer theory is probably exactly what is happening.

    They will eventually refine and fine-tune the process and create batch applications that can update multiple VMs safely.  For all we know they are doing it by hand >

    Also, if they did mess up and they have to roll back, the restore can fail or time out and they have to try again.  This could go on for a long time lol.  Just be patient.

    You realise ZeniMax is one of the "richest" privately owned video game companies in North America, right, with deep ties in the movie industry and some actual high-profile people on their board... a guildy looked it up and it was actually fairly interesting information.

     

    ZeniMax is the largest privately held video games company in North America[5]

    ZeniMax Media was reportedly valued at about USD $1.2 billion, in 2007.[6][7]

     

     

    Doesn't mean they have those resources allocated to the department running this game.  Don't assume they aren't waiting to see a good ROI and get their recurring revenue stream flowing before they invest millions more into the game.

    If they were going to invest in upgrading server technology, they would have done it already. The point is, assuming they didn't do it because they don't have the money is silly. The reason they won't do it now is that, more often than not, market research has shown that numbers die down more than they increase before they balance out, and it isn't worth investing more in the servers.

  • hayes303 Member Uncommon Posts: 434

    Anyone else notice that games seemed to have cleaner launches before betas became a marketing tool? WoW was an exception because Blizzard didn't forecast how popular it was going to be (that launch was brutal).

    This game needed to stay in the oven for another month of serious beta, not the try-before-you-buy, win-a-free-key-for-showing-up beta it had.

  • prowess Member Uncommon Posts: 169
    Originally posted by mbrodie
    Originally posted by CharminUltra561
    Originally posted by movros99
    Originally posted by prowess

    my creds: I perform updates and maintenance on enterprise infrastructure for a living.

     

    My guess is that for this "megaserver" they're using a cluster of hosts and many-MANY virtual machines.  They have virtual machines that serve the logins, serve the patching, serve the game, and function as the database-backend.  For the update, they have to install updates and reboot each individual VM...  If they're doing a database engine update, the DB-cluster must be rotated precisely.  This will usually have to be done via cmd line...  One slip-up here and you've got to restore a backup and start over.

    Yeesh.  This sounds tedious and creates an environment prone to bugs and...um well yea.  This makes sense.

    To your point, it is long and tedious; however, the systems they would need in place to do it quicker are more than likely more expensive than they can afford at the moment.

    I work on the dev side of enterprise apps, but I'm decently familiar with the problems our deployment teams go through, and the MegaServer theory is probably exactly what is happening.

    They will eventually refine and fine-tune the process and create batch applications that can update multiple VMs safely.  For all we know they are doing it by hand >

    Also, if they did mess up and they have to roll back, the restore can fail or time out and they have to try again.  This could go on for a long time lol.  Just be patient.

    You realise ZeniMax is one of the "richest" privately owned video game companies in North America, right, with deep ties in the movie industry and some actual high-profile people on their board... a guildy looked it up and it was actually fairly interesting information.

     

    ZeniMax is the largest privately held video games company in North America[5]

    ZeniMax Media was reportedly valued at about USD $1.2 billion, in 2007.[6][7]

    Corporate governance

    The company's Board of Directors consists of 8 individuals:

    "expensive" doesn't necessarily mean in dollars...  Throwing money at IT problems creates a lot of solutions, but not all!

     

    It's likely very difficult to accurately script this sort of thing out...  it's quite possible that they DID have scripts in place to streamline the update process but it went wrong and now they're doing it by hand...  However, I've never seen a database cluster scripted out for updates...  it's almost always a manual process.
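
    To illustrate why that database tier resists automation, here's the rough shape of a rolling cluster upgrade written as a semi-manual runbook. Again, this is a generic sketch: "clusterctl" and the node names are invented placeholders (I have no idea what engine or tooling ZOS uses); the point is how many human checkpoints sit between the actual commands.

    ```python
    #!/usr/bin/env python3
    """Semi-manual runbook for a rolling database-cluster upgrade.

    A sketch only: the node names and the 'clusterctl' tool are invented
    placeholders. The operator has to approve every step, which is exactly
    why this part of a maintenance window is slow and hard to fully script.
    """
    import subprocess
    import sys

    REPLICAS = ["db-replica-01.example.net", "db-replica-02.example.net"]
    PRIMARY = "db-primary-01.example.net"


    def confirm(step):
        """Pause until a human explicitly approves the step (or aborts the run)."""
        answer = input(f"{step} -- proceed? [y/N] ").strip().lower()
        if answer != "y":
            sys.exit("Aborted; restore from the pre-maintenance backup if needed.")


    def remote(host, command):
        """Run one remote command; any failure stops the runbook immediately."""
        print(f"[{host}] {command}")
        subprocess.run(["ssh", host, command], check=True)


    def upgrade_node(host):
        confirm(f"Take {host} out of rotation (load balancer / replication)")
        remote(host, "clusterctl drain")                # placeholder tooling
        confirm(f"Replication lag on {host} is zero and clients have drained")
        remote(host, "yum -y update database-engine")   # placeholder package
        remote(host, "clusterctl rejoin")
        confirm(f"{host} has rejoined the cluster and is back in sync")


    def main():
        confirm("A fresh full backup exists and has been test-restored")
        for replica in REPLICAS:
            upgrade_node(replica)
        confirm(f"Fail the primary role over from {PRIMARY} to an upgraded replica")
        remote(PRIMARY, "clusterctl failover")
        upgrade_node(PRIMARY)
        print("Cluster upgraded; hand it over for smoke tests.")


    if __name__ == "__main__":
        main()
    ```

    Every one of those confirm() prompts is a spot where the evening can stretch out, and if anything looks wrong you go back to the backup and start the window over.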

  • CharminUltra561 Member Posts: 4
    Originally posted by mbrodie
    Originally posted by CharminUltra561
    Originally posted by mbrodie
    Originally posted by CharminUltra561
    Originally posted by movros99
    Originally posted by prowess

    my creds: I perform updates and maintenance on enterprise infrastructure for a living.

     

    My guess is that for this "megaserver" they're using a cluster of hosts and many-MANY virtual machines.  They have virtual machines that serve the logins, serve the patching, serve the game, and function as the database-backend.  For the update, they have to install updates and reboot each individual VM...  If they're doing a database engine update, the DB-cluster must be rotated precisely.  This will usually have to be done via cmd line...  One slip-up here and you've got to restore a backup and start over.

    Yeesh.  This sounds tedious and creates an environment prone to bugs and...um well yea.  This makes sense.

    To your point, it is long and tedious; however, the systems they would need in place to do it quicker are more than likely more expensive than they can afford at the moment.

    I work on the dev side of enterprise apps, but I'm decently familiar with the problems our deployment teams go through, and the MegaServer theory is probably exactly what is happening.

    They will eventually refine and fine-tune the process and create batch applications that can update multiple VMs safely.  For all we know they are doing it by hand >

    Also, if they did mess up and they have to roll back, the restore can fail or time out and they have to try again.  This could go on for a long time lol.  Just be patient.

    You realise ZeniMax is one of the "richest" privately owned video game companies in North America, right, with deep ties in the movie industry and some actual high-profile people on their board... a guildy looked it up and it was actually fairly interesting information.

     

    ZeniMax is the largest privately held video games company in North America[5]

    ZeniMax Media was reportedly valued at about USD $1.2 billion, in 2007.[6][7]

     

     

    Doesn't mean they have those resources allocated to the department running this game.  Don't assume they aren't waiting to see a good ROI and get their recurring revenue stream flowing before they invest millions more into the game.

    If they were going to invest in upgrading server technology, they would have done it already. The point is, assuming they didn't do it because they don't have the money is silly. The reason they won't do it now is that, more often than not, market research has shown that numbers die down more than they increase before they balance out, and it isn't worth investing more in the servers.

    I don't know what they have or have not done; however, I do know the systems (not just server tech) to do this quickly and efficiently are expensive.  A company reaching out into a new space is only going to allocate X amount of resources to that project until it is proven successful.  If that prevented them from getting something they needed, or even from having more headcount on their staff so that they can apply these things quicker, then that COULD account for the long maintenance window.

    On top of that, even the best systems fail lol, so sh*t happens, right?

  • mbrodie Member Rare Posts: 1,504
    Originally posted by prowess
    Originally posted by mbrodie
    Originally posted by CharminUltra561
    Originally posted by movros99
    Originally posted by prowess

    my creds: I perform updates and maintenance on enterprise infrastructure for a living.

     

    My guess is that for this "megaserver" they're using a cluster of hosts and many-MANY virtual machines.  They have virtual machines that serve the logins, serve the patching, serve the game, and function as the database-backend.  For the update, they have to install updates and reboot each individual VM...  If they're doing a database engine update, the DB-cluster must be rotated precisely.  This will usually have to be done via cmd line...  One slip-up here and you've got to restore a backup and start over.

    Yeesh.  This sounds tedious and creates an environment prone to bugs and...um well yea.  This makes sense.

    To your point, it is long and tedious; however, the systems they would need in place to do it quicker are more than likely more expensive than they can afford at the moment.

    I work on the dev side of enterprise apps, but I'm decently familiar with the problems our deployment teams go through, and the MegaServer theory is probably exactly what is happening.

    They will eventually refine and fine-tune the process and create batch applications that can update multiple VMs safely.  For all we know they are doing it by hand >

    Also, if they did mess up and they have to roll back, the restore can fail or time out and they have to try again.  This could go on for a long time lol.  Just be patient.

    You realise ZeniMax is one of the "richest" privately owned video game companies in North America, right, with deep ties in the movie industry and some actual high-profile people on their board... a guildy looked it up and it was actually fairly interesting information.

     

    ZeniMax is the largest privately held video games company in North America[5]

    ZeniMax Media was reportedly valued at about USD $1.2 billion, in 2007.[6][7]

    Corporate governance

    The company's Board of Directors consists of 8 individuals:

    "expensive" doesn't necessarily mean in dollars...  Throwing money at IT problems creates a lot of solutions, but not all!

     

    It's likely very difficult to accurately script this sort of thing out...  it's quite possible that they DID have scripts in place to streamline the update process but it went wrong and now they're doing it by hand...  However, I've never seen a database cluster scripted out for updates...  it's almost always a manual process.

    That's also assuming their setup is what you think it is; it could be completely different to a bunch of virtual machines and such as you suggested earlier... This isn't a debate, I'm just saying ZOS is a rich company, but it's their first MMO and dealing with server tech, so they're probably not that great at doing it just yet... as per my very first post.

  • CharminUltra561 Member Posts: 4
    Originally posted by prowess
    however, I've never seen a database cluster scripted out for updates...  it's almost always a manual process.

    ^ This. LOL, I don't know why no one has figured that out yet.  Figure it out, prowess, and you've got a million-dollar idea there.

  • Octagon7711 Member Legendary Posts: 9,004
    Originally posted by Reklaw
    Originally posted by keithian
    It's good to see all this whining, with new threads every time there is maintenance or downtime, because that means that people are anxious to keep playing, which is a sign of a good game.

    I came home early at 13:00 this afternoon and didn't think about the maintenance schedule today. I did read that the EU got it this morning, but wasn't really bothered since I play on the US server.

    So around 14:00 hours I decided to give the game a spin, but was met with the message that the server is coming down for maintenance.

    So yeah, I really want to play... I booted up Assassin's Creed: BF, and after about 20 minutes I tried Thief, but was really set into this ES vibe.

    So I decided to download the EU version; I figured since I am EU I might as well, and might profit from it when the game actually has EU-based servers. Anyway... I finished the download, pressed PLAY, and to my surprise the EU servers have also been taken offline due to maintenance.

    Now I can't play on either EU or US...

    Meh! :P

    Someone didn't check the ESO forums.

    "We all do the best we can based on life experience, point of view, and our ability to believe in ourselves." - Naropa      "We don't see things as they are, we see them as we are."  SR Covey

  • prowess Member Uncommon Posts: 169
    Originally posted by CharminUltra561
    Originally posted by prowess
    however, I've never seen a database cluster scripted out for updates...  it's almost always a manual process.

    ^ This. LOL, I don't know why no one has figured that out yet.  Figure it out, prowess, and you've got a million-dollar idea there.

    Trust me, I'm trying!  lol

  • Digna Member Uncommon Posts: 1,994

    Hm, I didn't know the servers were back. lol

     

    back at it.

  • Gobstopper3D Member Rare Posts: 970
    Reminds me of the early days of EVE Online.  This is nothing compared to what those of us back then went through with that game!  Having mega servers comes with its own set of problems.

    I'm not an IT Specialist, Game Developer, or Clairvoyant in real life, but like others on here, I play one on the internet.

  • Yoda_Clone Member Posts: 219
    Originally posted by Digna

    Hm, I didn't know the servers were back. lol

     

    back at it.

    Servers may be back, but can't log in.

  • VicDynamo Member Posts: 234
    The Mega server dropped the tray?
  • Digna Member Uncommon Posts: 1,994
    Originally posted by Yoda_Clone
    Originally posted by Digna

    Hm, I didn't know the servers were back. lol

     

    back at it.

    Servers may be back, but can't log in.

    I was in when I posted.

  • Octagon7711 Member Legendary Posts: 9,004

    People are having those old launcher problems.  

    209 error, unable to detect patch version stopping for repair

    Error 209-PatchManifestError_VersionFail

    "We all do the best we can based on life experience, point of view, and our ability to believe in ourselves." - Naropa      "We don't see things as they are, we see them as we are."  SR Covey

  • Wickedjelly Member, Newbie Common Posts: 4,990
    Originally posted by Octagon7711

    People are having those old launcher problems.  

    209 error, unable to detect patch version stopping for repair

    Error 209-PatchManifestError_VersionFail

    Yep.

    I know I'm having this issue now.

    1. For god's sake mmo gamers, enough with the analogies. They're unnecessary and your comparisons are terrible, dissimilar, and illogical.

    2. To posters feeling the need to state how f2p really isn't f2p: Players understand the concept. You aren't privy to some secret the rest are missing. You're embarrassing yourself.

    3. Yes, Cpt. Obvious, we're not industry experts. Now run along and let the big people use the forums for their purpose.
