It seems like the same question comes up with just about every customer or prospect: "Will your product introduce additional latency into our cloud app environment?" It's a legitimate question since our hybrid CASB architecture includes proxies for real-time policy enforcement. If you guessed that the Bitglass proxy would be slower, you would be wrong!
By definition, a proxy includes an additional network hop, so how is that possible? First, let's look at some data.
The chart shows network round-trip times in two different scenarios. The red line shows latency from a client endpoint directly to Office 365, while the blue line shows latency from a client endpoint, through Bitglass, and then on to Office 365. Our automated QA systems conduct many such tests daily, in a variety of scenarios, always with similar results: we're not always faster, but on average, the Bitglass proxy comes out ahead!
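For readers who want to reproduce this kind of comparison themselves, here is a minimal sketch of a round-trip timing harness. The `direct_request` and `proxied_request` functions below are illustrative stand-ins (the sleep values are made up, not Bitglass measurements); in a real test they would issue actual HTTP requests to the app directly and via the proxy.

```python
import statistics
import time

def measure_rtt_ms(request_fn, samples=5):
    """Call request_fn repeatedly and return the mean round-trip time in ms."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        request_fn()
        times.append((time.perf_counter() - start) * 1000)
    return statistics.mean(times)

# Hypothetical stand-ins for real HTTP calls; replace with e.g. requests.get()
# against the cloud app directly and the same request routed through the proxy.
def direct_request():
    time.sleep(0.020)   # simulated 20 ms direct path

def proxied_request():
    time.sleep(0.015)   # simulated 15 ms path via a well-peered proxy

direct = measure_rtt_ms(direct_request)
proxied = measure_rtt_ms(proxied_request)
print(f"direct: {direct:.1f} ms, via proxy: {proxied:.1f} ms")
```

Averaging over many samples, as above, matters because individual round trips vary with transient network conditions; a single measurement in either direction proves little.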
So how is this possible? It's the result of our architectural decision to build the Bitglass service on AWS, a globally distributed, scalable cloud infrastructure. Since many cloud apps like Office 365 pin users to pre-defined data centers at tenant rollout, those users pay a performance tax as their traffic traverses the Internet to its home cloud.
Unlike other CASBs, which rely on proprietary data centers, Bitglass is deployed across many data centers, giving users everywhere low-latency access to an ultra-high-speed backbone. Additionally, the auto-scale flexibility engineered into our products ensures that as soon as any user starts to experience increased latency due to load on our service, the system grows automatically to meet that demand.