What we know and don't know about Salesforce Hyperforce

Salesforce made a big announcement at the recently concluded Dreamforce event: Hyperforce. We do not know much about it beyond what Salesforce announced on the day.

My discussions with various people in the Salesforce ecosystem turned up few concrete details on what it means for you and me, and for the customers we work with.

What we know

  1. Until now, Salesforce instances were all delivered from Salesforce’s own data centres. This changed with Australia, where Salesforce tied up with AWS, and from what I understand a similar approach has been adopted in India as well. Going forward, customers can also opt for Salesforce instances on the Google and Azure clouds.
  2. Since customers can opt for any of these clouds, they can more easily comply with the data residency laws that are becoming the norm in most countries.
  3. All existing custom development and AppExchange apps will work on these instances as well. Nothing changes.

What we don’t know

  1. The press release talks about adding compute capacity and attaining B2C scale. The wording is vague: it is not clear whether companies will be able to pay AWS, Google, or Azure directly for additional compute to scale the system, or whether customers will need to opt for a pay-for-what-you-use SKU billed through Salesforce.
  2. It is also not clear whether storage can be extended in the same way, or whether customers will have to keep relying on costly Salesforce storage. Without cheaper storage, B2C scale is always challenging.
  3. If compute can be added, what happens to governor limits, especially the async, CPU time, and query limits? (The sketch after this list shows the kinds of limits in question.)
  4. What is the $$$ impact of moving an existing org to a localised setup? Is there a time window within which the move will be free?
  5. What about Heroku?
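
For context on the limits question in point 3: Salesforce already exposes org-level limits, including daily async Apex executions and data storage, through its REST API’s Limits resource. The Python sketch below shows how those numbers can be inspected today; the instance URL and access token are placeholders, and whether Hyperforce raises any of these ceilings is exactly the open question.

```python
import requests

# Placeholders: substitute your org's instance URL and a valid OAuth access token.
INSTANCE_URL = "https://yourInstance.my.salesforce.com"
ACCESS_TOKEN = "<access-token>"
API_VERSION = "v50.0"

def fetch_org_limits():
    """Call the REST Limits resource, which reports org-wide limits
    such as DailyAsyncApexExecutions and DataStorageMB."""
    response = requests.get(
        f"{INSTANCE_URL}/services/data/{API_VERSION}/limits",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    limits = fetch_org_limits()
    # Each entry in the response reports a Max and a Remaining value.
    for name in ("DailyAsyncApexExecutions", "DataStorageMB", "DailyApiRequests"):
        entry = limits.get(name, {})
        print(f"{name}: {entry.get('Remaining')} of {entry.get('Max')} remaining")
```

Note that per-transaction governor limits (SOQL queries, CPU time, async calls) are enforced separately inside Apex, so even if org-level capacity can be bought from the underlying cloud, those limits would need their own answer.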

What do you think?