This portal is to open public enhancement requests against IBM Power Systems products, including IBM i. To view all of your ideas submitted to IBM, create and manage groups of Ideas, or create an idea explicitly set to be either visible by all (public) or visible only to you and IBM (private), use the IBM Unified Ideas Portal (https://ideas.ibm.com).
We invite you to shape the future of IBM, including product roadmaps, by submitting ideas that matter to you the most. Here's how it works:
Start by searching and reviewing ideas and requests to enhance a product or service. Take a look at ideas others have posted, and add a comment, vote, or subscribe to updates on them if they matter to you.
If you can't find what you are looking for, post an idea.
Get feedback from the IBM team and other customers to refine your idea.
Follow the idea through the IBM Ideas process.
Welcome to the IBM Ideas Portal (https://www.ibm.com/ideas) - Use this site to find additional details about the IBM Ideas process and statuses.
IBM Unified Ideas Portal (https://ideas.ibm.com) - Use this site to view all of your ideas, create new ideas for any IBM product, or search for ideas across all of IBM.
ideasibm@us.ibm.com - Use this email to suggest enhancements to the Ideas process or request help from IBM for submitting your Ideas.
Thanks for submitting this idea. In general, it is definitely a good thing to offload work from the CPUs where possible, and there are a number of things we do already. Starting with POWER7+, we added separate data-encryption engines and data-compression engines to offload this work from software and allow the CPUs to do other work. Not only does this save CPU compute resources, but the offload engines are much faster. Regarding your specific suggestion to offload I/O handling and virtualization to separate customized CPUs, there are several considerations:
1) Having dedicated CPU modules will be a challenge, since the number of cores needed for this is relatively low while the number of cores per socket is increasing significantly. Dedicating an entire socket to I/O and virtualization will be hard to optimize.
2) There is a trade-off to be made here: today's implementation allows for a highly affinitized configuration in which the I/O and memory can be local to the CPU socket or node, providing a low-latency, high-bandwidth partition. If the I/O were in a separate node, all of the I/O traffic would need to be transferred across the processor fabric, which would be susceptible to bottleneck/contention issues.
3) Lastly, instead of using CPUs for I/O offload, the industry is heading toward DPUs to offload network and storage tasks from the CPU. Where this makes sense, we will look to support DPUs embedded into PCIe cards, similar to how we leveraged SR-IOV virtualization on PCIe cards.
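The affinity concern in point 2 can be observed directly on Linux, where sysfs exposes each PCIe device's local NUMA node via a `numa_node` file (a value of -1 means no affinity is reported). The sketch below is illustrative only: the device names and the fake sysfs tree are assumptions so the demo runs anywhere; on a real Linux host you would point it at `/sys/bus/pci/devices` instead.

```python
# Sketch: grouping PCIe devices by their local NUMA node, as exposed by
# Linux sysfs. Device names below are hypothetical examples.
import os
import tempfile
from collections import defaultdict

def devices_by_numa_node(sysfs_pci_root):
    """Map NUMA node -> list of PCI device names under sysfs_pci_root."""
    groups = defaultdict(list)
    for dev in sorted(os.listdir(sysfs_pci_root)):
        node_file = os.path.join(sysfs_pci_root, dev, "numa_node")
        if os.path.isfile(node_file):
            with open(node_file) as f:
                groups[int(f.read().strip())].append(dev)
    return dict(groups)

# Build a tiny fake sysfs tree so the demo is self-contained; on a real
# Linux host, pass "/sys/bus/pci/devices" instead.
with tempfile.TemporaryDirectory() as root:
    for dev, node in [("0000:00:01.0", 0), ("0000:40:00.0", 1)]:
        os.makedirs(os.path.join(root, dev))
        with open(os.path.join(root, dev, "numa_node"), "w") as f:
            f.write(str(node))
    print(devices_by_numa_node(root))
    # → {0: ['0000:00:01.0'], 1: ['0000:40:00.0']}
```

A partition whose I/O adapters all report the same node as its cores and memory avoids the cross-fabric traffic the response describes.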
Can you please describe "the mainframe lesson" and how it differs from using dedicated cores for VIOS?