What’s the point of offering world-class self-service business intelligence reporting software if your end users can go to the bathroom, wander by the coffee machine, wait for the coffee to brew while griping about Bob (who drained the coffee without making more, again), water their office plants, dust off their monitors, and pin the latest piece of kid art to their cubicle walls, all before their report finishes executing?

Well, the point will depend primarily on whether those end users feel the wait is reasonable. If Cheryl knows she’s executing a report that will return hundreds of thousands of records, she may very well expect it to take twenty minutes and will budget her time accordingly. But if she has to spend twenty seconds staring at a loading GIF while a six-record report loads, her next stop will probably be your support desk.

In our Performance & Scaling series, we’ll explain how to optimize Exago’s technology, manage users’ expectations, and generally minimize the number of disgruntled emails you get from the Cheryls of the world.

First stop: network and hardware. Before we can even begin thinking about things like database optimization or application configuration, we need to assess the performance of the physical technology — the fibers, cables, and chips — making the whole system work in the first place.


Network

A network is a collection of connected computers, and an Exago network can include dozens of machines and connections.


For the purposes of this discussion, though, we’ll be referring to a basic Exago setup, illustrated below. In general, the closer the machines are to one another physically, the better the network performance will be.


If a user is in the same city as the application server, and the application server is in the same room as the database server, data packets exchanged between machines will never have to travel very far. The connection will seem fast, regardless of whether the network cables are copper wire or fiber optic.

Network optimization starts getting tricky as distances increase. Let’s say you’re a SaaS company based in the Northeast United States that develops a solution for healthcare professionals providing in-home medical services. The majority of your customers are also located in the Northeast region, so you host your application and Exago BI locally to keep the client-to-application connection fast. Unfortunately, the only on-budget database servers you were able to find are located in Arizona.

This introduces latency between the application server and the database server. You can ping your database from the application server to see how long it takes a small packet of information to travel between machines, but realize that it will take many, many packets to transfer a whole report’s worth of data and that these packets will be considerably larger than your ping’s test packet. Database queries are by far the largest element of an execution, with the most potential for inefficiency and slowdown. To assess your network’s speeds, follow our best practices for measuring latency and try to shorten your times by bringing your machines into closer proximity to one another.
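If you would rather script the check than run ping by hand, one rough way to sample round-trip time is to time repeated TCP connections from the application server to the database server’s listening port. The snippet below is only a minimal sketch and assumes Python is available on the application server; DB_HOST and DB_PORT are placeholders for your own database address and port.

    # Rough latency sample: time repeated TCP connections from the
    # application server to the database server's listening port.
    import socket
    import time

    DB_HOST = "db.example.com"   # placeholder: your database server's address
    DB_PORT = 1433               # placeholder: e.g. the default SQL Server port
    SAMPLES = 10

    round_trips_ms = []
    for _ in range(SAMPLES):
        start = time.perf_counter()
        with socket.create_connection((DB_HOST, DB_PORT), timeout=5):
            pass  # connection opened and immediately closed
        round_trips_ms.append((time.perf_counter() - start) * 1000)

    print(f"min {min(round_trips_ms):.1f} ms  "
          f"avg {sum(round_trips_ms) / len(round_trips_ms):.1f} ms  "
          f"max {max(round_trips_ms):.1f} ms")

Keep in mind that this measures connection latency only, not throughput; as noted above, a real report execution moves far more (and far larger) packets than any single test.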

In some cases, SaaS companies don’t host their users’ data but instead connect directly to each client’s proprietary database. Without multiple instances of Exago paired with each database, this scenario might look something like the diagram below. Since the application server is located in the Washington D.C. area, connections to New York will be much speedier than connections to Washington State or California. Both the Los Angeles database and the Los Angeles client nodes have to communicate with a server on the other side of the continent.

In such scenarios, it is advisable to maintain multiple application servers with matching configuration files, effectively creating a network for each region. This keeps the distances between machines low and the performance ceiling high.


But of course, network cables are not the only hardware variable affecting performance. The servers themselves will need enough processing power to handle network traffic and to scale over time.


Servers

Server performance guidelines apply to the database server, the application server, and any other servers you may have as part of your system. In general, the goal is for each machine to have sufficient processing power and memory to handle the network’s requests.

Starting with multi-core CPUs and plenty of RAM is good practice, and the best way to estimate how much you will need is to run stress tests in a basic environment, monitor the machine’s processes, and use that data to predict how much traffic the server will be able to handle in production. Of course, it is important to continue this monitoring practice after deployment, as your estimates are just that: estimates. Keep storing monitoring data as you acquire more users and accumulate more records so that you can identify when a machine is approaching its limits before it becomes overextended.
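One lightweight way to keep that history is a small sampler that logs CPU and memory utilization at a fixed interval; charting the output file over weeks or months makes it easier to spot a server trending toward saturation before it gets there. The sketch below assumes Python and the third-party psutil package are installed on the server; the log path and interval are placeholders.

    # Resource sampler: append one CPU/memory reading per interval to a CSV
    # file so utilization trends can be charted over time. Runs until stopped.
    import csv
    import time
    from datetime import datetime

    import psutil  # third-party: pip install psutil

    LOG_PATH = "server_utilization.csv"  # placeholder output location
    INTERVAL_SECONDS = 60

    while True:
        cpu_pct = psutil.cpu_percent(interval=1)   # CPU %, averaged over 1 second
        mem_pct = psutil.virtual_memory().percent  # % of physical RAM in use
        with open(LOG_PATH, "a", newline="") as f:
            csv.writer(f).writerow([datetime.now().isoformat(), cpu_pct, mem_pct])
        time.sleep(INTERVAL_SECONDS)

A plain CSV keeps the sampler dependency-free on the storage side; you can always feed the same readings into whatever monitoring platform you already use.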

To take advantage of a multi-core CPU on your application server, be sure to set your Internet Information Services (IIS) app pool settings accordingly. Setting Maximum Worker Processes to a value greater than 1 enables the pool to run multiple worker processes simultaneously, taking fuller advantage of the machine’s multiple cores. Adjusting this setting will increase your machine’s demand for RAM, however, as each process will need access to its own “copy” of the application in memory. It will also require that you set up a state service responsible for channeling calls to the correct worker process.
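For reference, Maximum Worker Processes can typically be found in IIS Manager under the app pool’s Advanced Settings, and the same value can be set from the command line with appcmd. The one-liner below is only a sketch: the pool name “ExagoAppPool” and the value of 4 are placeholders for your own app pool name and core count.

    %windir%\system32\inetsrv\appcmd.exe set apppool "ExagoAppPool" /processModel.maxProcesses:4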


Once you have primed your network and hardware for high-performance service, you can proceed with confidence to optimizing your data and software.

As always, for more information on this and other topics related to performance optimization and scaling, visit the Exago Knowledge Base.