Salesforce made a big announcement at the recently concluded Dreamforce event: Hyperforce. We know little about it beyond what Salesforce shared on the day.
My discussions with various people in the Salesforce ecosystem turned up few concrete details on what it means for you and me, and for the customers we work with.
What we know
- Until now, all Salesforce instances were delivered from Salesforce’s own data centres. This changed with Australia, where Salesforce tied up with AWS, and from what I understand, a similar approach has been adopted in India as well. Going forward, customers can also opt for Salesforce instances on the Google and Azure clouds.
- Since customers can opt for any of these clouds, they can more easily comply with data residency laws, which are becoming the norm in most countries.
- All existing custom development and AppExchange apps will work on these instances as well. Nothing changes.
What we don’t know
- The press release talks about adding compute capacity and attaining B2C scale. The wording is a little vague: it is not clear whether companies will be able to pay AWS, Google or Azure directly for additional compute to scale the system, or whether they will need to opt for a pay-for-what-you-use SKU billed through Salesforce.
- It is also not clear whether storage can be extended similarly, or whether customers will have to rely on costly Salesforce storage in the future as well. Without cheaper storage, B2C scale is always challenging.
- If compute can be added, what happens to governor limits, especially the async, CPU time and query limits? (A quick sketch of how these limits surface today follows this list.)
- What is the $$$ impact of moving an existing org to a localised setup? Is there a time window within which the move will be free?
- What about Heroku?
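For readers less familiar with the governor limits mentioned above, here is a minimal sketch (runnable as anonymous Apex) of how a few per-transaction ceilings surface today through the standard Limits class. Whether Hyperforce would raise any of these numbers when extra compute is added is exactly the open question.

```apex
// Minimal sketch: inspecting a few per-transaction governor limits
// through the standard Limits class. The open question is whether
// Hyperforce raises any of these ceilings when compute is added.
System.debug('SOQL queries: ' + Limits.getQueries()
    + ' of ' + Limits.getLimitQueries());
System.debug('CPU time (ms): ' + Limits.getCpuTime()
    + ' of ' + Limits.getLimitCpuTime());
System.debug('Queueable jobs: ' + Limits.getQueueableJobs()
    + ' of ' + Limits.getLimitQueueableJobs());
```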
What do you think?