IBM Power Ideas Portal


This portal is for opening public enhancement requests against IBM Power Systems products, including IBM i. To view all of your ideas submitted to IBM, to create and manage groups of ideas, or to create an idea explicitly set to be either visible to all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).


Shape the future of IBM!

We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:

Search existing ideas

Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you. If you can't find what you are looking for, post a new idea.

Post your ideas
  1. Post an idea.

  2. Get feedback from the IBM team and other customers to refine your idea.

  3. Follow the idea through the IBM Ideas process.


Specific links you will want to bookmark for future use

Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find additional information and details about the IBM Ideas process and statuses.

IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.

ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.

Status: Not under consideration
Workspace: IBM i
Categories: Core OS
Created by: Guest
Created on: Jan 18, 2021

Add timestamp to peak temporary storage

The system tracks the peak temporary storage for each temporary storage bucket. The addition of a timestamp for when the peak was hit would be very helpful in understanding and debugging temporary storage consumption issues.
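
For context, the per-bucket peak is already visible through the QSYS2.SYSTMPSTG service mentioned in the comments below. The query that follows is only a sketch: the column names (BUCKET_PEAK_SIZE, BUCKET_CURRENT_SIZE, GLOBAL_BUCKET_NAME, JOB_NAME) reflect my reading of the SYSTMPSTG documentation and should be verified on your release, and the commented-out PEAK_TIMESTAMP column is hypothetical, marking only where the timestamp requested by this idea would surface.

-- Largest temporary storage buckets, current vs. peak size.
-- Column names are assumptions; verify against QSYS2.SYSTMPSTG on your release.
SELECT BUCKET_NUMBER,
       COALESCE(GLOBAL_BUCKET_NAME, JOB_NAME) AS BUCKET_OWNER,
       BUCKET_CURRENT_SIZE,
       BUCKET_PEAK_SIZE
       -- , PEAK_TIMESTAMP   -- hypothetical column this idea asks for
  FROM QSYS2.SYSTMPSTG
 ORDER BY BUCKET_PEAK_SIZE DESC
 FETCH FIRST 10 ROWS ONLY;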


Use Case:

I have a client with temporary storage problems. We can see the peaks, but do not know when they occurred. Knowing the timeframe in which the peak occurred would allow some correlation with Collection Services data or the plan cache to get a better idea of what was happening.


Idea priority: Medium
  • Guest
    Mar 12, 2021

    A single timestamp for peak storage usage is misleading, since storage usage goes up and down and the last function to create even a tiny amount of storage can cause it to hit a new peak, even though that function may have nothing to do with what caused the storage to increase unexpectedly.
    Collection Services provides a more complete picture.

  • Guest
    Feb 17, 2021

    The CEAC has reviewed this requirement and recommends that IBM view this as a MEDIUM priority requirement that should be addressed.

    Background: The COMMON Europe Advisory Council (CEAC) members have a broad range of experience in working with small and medium-sized IBM i customers. CEAC has a crucial role in working with IBM i development to help assess the value and impact of individual RFEs on the broader IBM i community and has therefore reviewed your RFE.

    To find out how CEAC helps to shape the future of IBM i, see CEAC @ ibm.biz/BdYSYj and the article "The Five Hottest IBM i RFEs Of The Quarter" at ibm.biz/BdYSZT

    Therese Eaton – CEAC Program Manager, IBM

  • Guest
    Feb 9, 2021

    I was thinking, for ease of use, that the timestamp should be available on the Navigator for i "Temporary Storage Details" page, as well as via the QSYS2.SYSTMPSTG service.

    However, the suggestion to use Collection Services data is acceptable to me since I understand that data. For users that are not knowledgeable about Collection Services, the suggestion above would be preferred.

  • Guest
    Feb 8, 2021

    The peak temporary storage field is kept in the QAPMJOBMI file in Collection Services. So you should be able to query JBPEAKTMP for the job and look for all intervals above a value just below the peak to find the date and time at which storage was near the peak (a query along these lines is sketched after the comments).

    A job could go through many peaks and valleys during its lifespan, so keeping a single timestamp might not show the entire picture.

    It's possible that something drove the storage up days before the peak was hit, but the most recent program that allocated some tiny amount of storage caused the cumulative amount to exceed the previous peak. A single peak timestamp would not be very useful for this type of use case, and may even be misleading.

    Tracking this at the SLIC level seems like a pretty big requirement for something that could be determined through data already captured by Collection Services. Is there something more needed from the performance tools here?
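
A minimal sketch of the Collection Services approach described in the last comment, for a single job: find the first interval in which the running peak (JBPEAKTMP) reaches roughly its final value. QAPMJOBMI and JBPEAKTMP come from the comment itself; the job-identification columns (JBNAME, JBUSER, JBNBR), the interval timestamp column (DATETIME), the 95% threshold, and the library MYCSLIB holding the Collection Services data are assumptions to adjust and verify for your environment.

-- Locate the interval in which the job's temporary storage peak was (approximately) reached.
-- Verify column names against the QAPMJOBMI record layout on your release.
WITH job_intervals AS (
    SELECT INTNUM,                  -- interval number
           DATETIME,                -- interval timestamp (column name is an assumption)
           JBPEAKTMP                -- peak temporary storage reported for the job
      FROM MYCSLIB.QAPMJOBMI        -- hypothetical Collection Services library
     WHERE JBNAME = 'MYJOB'         -- hypothetical job name
       AND JBUSER = 'MYUSER'        -- hypothetical job user
       AND JBNBR  = '123456'        -- hypothetical job number
)
SELECT INTNUM, DATETIME, JBPEAKTMP
  FROM job_intervals
 WHERE JBPEAKTMP >= 0.95 * (SELECT MAX(JBPEAKTMP) FROM job_intervals)
 ORDER BY INTNUM
 FETCH FIRST 1 ROW ONLY;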