Use this portal to open public enhancement requests against IBM Power Systems products, including IBM i. To view all of your ideas submitted to IBM, create and manage groups of ideas, or create an idea explicitly set to be either visible to all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).
We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:
Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for,
Post an idea.
Get feedback from the IBM team and other customers to refine your idea.
Follow the idea through the IBM Ideas process.
Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find out additional information and details about the IBM Ideas process and statuses.
IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.
ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.
The CAAC has reviewed this IBM Idea and recommends that IBM view this as a medium priority Idea that should be addressed.
While this idea discusses BRMS, we would like to see this functionality in the native save commands as well.
Background: The COMMON Americas Advisory Council (CAAC) members have a broad range of experience in working with small and medium-sized IBM i customers. CAAC has a key role in working with IBM i development to help assess the value and impact of individual IBM Ideas on the broader IBM i community and has therefore reviewed your Idea.
For more information about CAAC, see www.common.org/caac
Carmelita Ruvalcaba - CAAC Program Manager
I have identified the same problem. It is not efficient at all, because it requires doubling the block storage size of the supporting ASP, which makes it very expensive. This could be avoided by copying the same volumes to a different COS location instead of duplicating the volumes in the ASP and then moving them to the other location.
However, I suggest that IBM go even further and allow the use of not just one but multiple copy destinations, inside or even outside IBM COS, in a different data center.
It seems to me that it would not be too difficult to allow BRMS to instruct ICC to make multiple copies of the same volumes to different locations instead of allowing just one at a time.
It would give BRMS/ICC a worldwide backup capability.
Best Regards,
Licinio Seabra (Banco Atlantico Europa)
Good afternoon,
What you suggest is even worse compared to what we are doing, because it implies that someone has to do those moves every day, which kills the automation that BRMS provides. Also, that way BRMS only knows the last location the volumes were moved to, and we need BRMS to know that the volumes are in both buckets...
Let me try to explain better...
We are already doing saves with a custom CLLE program of all the libraries that we want to save every day, plus journals, IFS, etc...
We do the saves to ASP2.
Every type of save has its own bucket in Madrid and a paired bucket in Frankfurt.
Daily saves go to the daily buckets, monthly saves go to the monthly buckets, journals go to the journal buckets, etc.
To get the data into both the Madrid and Frankfurt buckets and have it registered in BRMS, we set "Mark volumes for duplication" to *YES in the *MED policies and "Move marked for duplication" to *YES in the *MOV policies, and after the saves we run DUPMEDBRM MOVPCY(frankfurt_bucket).
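Roughly, the daily CLLE looks something like the sketch below. The library, device, and policy names (APPLIB1, APPLIB2, CLOUDDEV, DLYMADRID, FRANKFURT) are only placeholders for this example, and the DUPMEDBRM parameters other than MOVPCY are illustrative and may differ in your environment:

    PGM
      /* Daily library saves with the Madrid daily media policy; the  */
      /* *MED policy has "Mark volumes for duplication" set to *YES.  */
      SAVLIBBRM LIB(APPLIB1 APPLIB2) DEV(CLOUDDEV) MEDPCY(DLYMADRID)

      /* IFS save with the same daily media policy.                   */
      SAVBRM DEV(CLOUDDEV) MEDPCY(DLYMADRID) OBJ(('/appdata'))

      /* Duplicate the volumes marked for duplication and hand the    */
      /* copies to the Frankfurt move policy, so BRMS registers the   */
      /* volumes in both buckets.                                     */
      DUPMEDBRM VOL(*SEARCH) FROMDEV(CLOUDDEV) TODEV(CLOUDDEV) +
                MOVPCY(FRANKFURT)
    ENDPGM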
This is working very well, and BRMS knows where all the volumes are. We have already tested it by simulating an outage in Madrid and restoring a library or an object from Frankfurt, and vice versa.
To keep the volumes quick to download from COS in case of need, in WRKPCYBRM *BKU, Option 1, we limit the volumes to 30 GB; of course that makes the volume count higher. Our daily save takes about 130 volumes.
Most of the saved objects stay in one volume; some objects span additional volumes. This strategy is working like a charm for us: quick to download if needed, redundant in terms of bucket location, and we can use WRKMEDIBRM to restore from one bucket or the other.
The only drawback of this solution is that we have to double the space needed in ASP 2 because of the DUPMEDBRM, and of course double the space means double the money on PowerVS.
So it would be very nice and useful to have a way to tell BRMS to transfer the volumes to two locations instead of one.
For example, in WRKPCYBRM *MED we already have a Move policy parameter; you could add, for example, a Redundant move policy parameter and have BRMS send the volumes to both locations, so that there would be no need to double the space.
Thank you.
We would like to suggest trying the following process to see if it meets your request. Change the cloud move policy of the initial backup to retain the media after it has been transferred to the cloud. This can be done with the WRKPCYBRM *MOV command, using Option 2 to change the Retain media field so the original media is kept on the system for a number of days after the transfer; a possible value might be 5 days. After the initial cloud transfer completes, the volume(s) will still exist on the system, and the WRKMEDBRM command with Option 8 can be used to move the volume(s) to the *HOME location. Since the volume(s) were retained, this will not require a transfer and will just update the location to *HOME. Then Option 8 can be used again to move the volume(s) to the secondary cloud location. This will allow you to use one backup copy to transfer to 2 separate cloud locations.
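As a rough sketch of that sequence (the move policy and bucket names here are placeholders, and the option numbers refer to the interactive panels described above, not to batch parameters):

    /* 1. One-time setup: in the move policy of the initial backup,    */
    /*    set "Retain media" so the volumes stay on the system for a   */
    /*    few days (for example, 5) after the first cloud transfer.    */
    WRKPCYBRM *MOV        /* Option 2 against the primary move policy  */

    /* 2. Run the normal backup. The volumes transfer to the first     */
    /*    cloud location but remain on the system during retention.    */

    /* 3. Move the retained volume(s) back to *HOME; no data is        */
    /*    transferred, only the BRMS location is updated.              */
    WRKMEDBRM             /* Option 8, move volume(s) to *HOME         */

    /* 4. Move the volume(s) again, this time to the secondary cloud   */
    /*    location, which performs the second cloud transfer.          */
    WRKMEDBRM             /* Option 8, move to the secondary location  */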
Please let us know if you have any questions or need further clarification. If this process meets your needs, we can add this documentation to our "How to" cloud information and close this issue.