• Hi guys, we have this little problem. Actually it's a great big monster of one, but I didn't want to scare anyone on my first post.

    We are running 11.6 SP3 in Design TTY mode through a batch process. We have two macros running simultaneously (two separate processes spawned from syscom via a batch entry) that are copying HUGE quantities of data from a number of databases into one new, almost empty db, which is a design db, multiwrite, implicit claim. The new db is created before the macros run, so we know it's both clean and empty. Anyway, after a considerable period of frantic activity the macros each do a savework (which could be at the same time, or maybe not). Almost invariably, the second macro to savework crashes with the error code above. Thanks to someone on here posting the error codes I know what it means - or rather, I know what it says it means, which is not exactly the same thing. In outline, each macro finishes like the sketch below.
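    A simplified PML sketch of the tail end of each macro (!!copyAll is a made-up stand-in for the real copy routine; the handle block is just to show where it dies):

        -- !!copyAll is a made-up stand-in for the real copy routine
        !!copyAll()
        SAVEWORK
        handle ANY
           -- this is where the second macro falls over with the 503
           $P Savework failed: $!!error.text
        endhandle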

    So, which one of you guys has any idea what a 503 really means? My gut feeling is there is just too much data going down the poor little wire and it's frying its brains out. If it does succeed (and it does sometimes) then the file size is about 0.3 GB, so not enormous.

    I don't want to do a savework more frequently, as experience has shown me that frequent saveworks clog the poor little PC's brain and it gets slower and slower, and the batch job doesn't have time for this. We have tried it on a copy project, so we had no users, and it still crashes - only occasionally, just like the real project. I suppose I could create a number of dbs and put the copied data into those rather than a single db, but I'm a lazy bloke and I want an easy life. We have to copy the files across the network to our other offices, as we aren't convinced Global can handle the transfer in the time slot.

    Oh the joys of running Global Multi Office Projects, in different time zones obviously (as opposed to being indifferent of course)

    So thinking caps on and get scribbling an answer please.

    Matt
  • I do not get the drift of the two macros running simultaneously... Both writing to the same db at the same time?

    Is your db in the same location as the batch process, or are you writing to a server share? Have you tried making the db local to minimize the network traffic? Gigabit network card?

    A 300 MB db is not that big; from my past experience PDMS should handle it.

    Did you run DICE on the project you are trying to copy? Merged all sessions?

    Have you tried a binary copy using the Reconfigurer rather than DATAL? (Rough sketch at the end of this post.)

    Caps off, scribbling finished...
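    From memory, a Reconfigurer run looks roughly like this (run in the RECONFIGURER module; the db and file names are made up, and the exact keywords may differ by version, so check the Reconfigurer manual):

        $* dump the source db to two intermediate binary files
        FROM DB PIPING/DESI
        TO FILES /C:\temp\rcf1 /C:\temp\rcf2
        RCFCOPY ALL
        OUTPUT

        $* then load the files into the target db
        FROM FILES /C:\temp\rcf1 /C:\temp\rcf2
        TO DB PIPING/DESI-COPY
        INPUT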
  • Yes, we have two macros running at the same time writing to the same db. We have to do this as we need the data a little later in the evening for further processing, and there isn't enough time in the 'window' to run the macros one after the other. So both are behaving like multiple users, and the db is multiwrite.
    We have a brand new db that we are writing to; we essentially create the db just before we load data into it from other databases. We keep a blank database file in a high-level directory, which we copy into the project sub-directory, overwriting the previous database file, so we can create all the new items in a 'clean' database. So we need neither to DICE nor to merge the new db. We have DICEd the empty database and it's fine, and we have DICEd the databases we are copying from.

    I'm not sure what the network card is. The batch process is running on the server which holds the data, though, so does it actually use the network?

    We are not using DATAL; we have a function which creates an object, and that object (which holds all the data) then has a method which creates the site/zone/pipe etc. by use of "new". In outline it works something like the sketch below (made-up names, members and attributes, not the real code).
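        -- cut-down PML sketch; names, members and attributes are invented
        define object PIPEDATA
           member .name is STRING
           member .spec is STRING
        endobject

        define method .build()
           -- create the element in the new db from the stored data
           NEW PIPE /$!this.name
           PSPEC /$!this.spec
        endmethod

    Used like this:

        !p = object PIPEDATA()
        !p.name = 'NEW-PIPE-1'
        !p.spec = 'A3B'
        !p.build()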
  • DE_ding, I have experimented with copying via Reconfigure and the initial results are very appealing. I think I will try to push this as the way forward.

    Thanks for your idea.

    Matt
  • Reconfiguration is by far the easiest and safest method, which therefore makes it the best way - go for it, Matt!
  • Just had an update from AVEVA: they have raised a "defect" for this as a bug and will be fixing it; no details as yet as to when.

    Matt