You said: " Our database server is running Server 2008r2, and SQL 2008 SP2. It has 32g RAM. We finally started looking at how SQL was set up for memory reservation. We saw GE put the max SQL memory at 26000. We set the max memory back to 25000, giving the O/S 7g. This seems to be working well. We also updated SQL to the latest SP3. There are some hotfixes that we have not applied yet.
What is the rule of thumb there? In discussions with support, we touched on it and they didn't really want more memory allocated to the VM for SQL. I could have allocated much more but they were happy with their settings and nothing we saw in perfmon suggested we should change anything."
There is no rule of thumb that I know of. Basically, the idea is to give the O/S the amount of RAM it needs to take care of business and allocate the rest to SQL. If the box does nothing but run SQL, the O/S may only need 4 GB. As you watch perfmon, you can see whether the O/S is struggling. Just reduce the SQL max memory until the box is happy.
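If it helps, that max memory knob can be adjusted on the fly with sp_configure. Here is a minimal T-SQL sketch, assuming SQL 2008 and the 25000 MB target mentioned above:

```sql
-- Enable advanced options so 'max server memory (MB)' is visible.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Cap the buffer pool; 25000 MB on a 32 GB box leaves ~7 GB for the O/S.
EXEC sp_configure 'max server memory (MB)', 25000;
RECONFIGURE;

-- Verify the configured and running values.
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name = 'max server memory (MB)';
```

The change takes effect at RECONFIGURE with no restart, though SQL releases buffer pool memory gradually rather than all at once.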
GE support can make recommendations, but you know your environment best, so tune the box for the best performance in that environment. We had GE support on our server many times. They saw how it was thrashing, but never made recommendations beyond throwing more hardware at the problem. An example was our JBoss server. We built the box based on their recommendations. It was not performing well, so they told us to add two more processors to it. We came to the realization that we knew our systems better than they did, so we began to tune things ourselves.
The result? We saw *better* performance from the JBoss server with only 2 processors instead of their recommended 4. We added RAM to the JBoss server, then tweaked the JVM settings so it could actually use those resources.
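For what it's worth, on JBoss releases from that era the JVM settings live in bin/run.conf as JAVA_OPTS. The values below are purely illustrative, not what we actually ran:

```
# bin/run.conf -- hypothetical example values only.
# Pin the heap so the JVM does not fight the O/S for memory,
# and give the permanent generation room for the deployed apps.
JAVA_OPTS="-Xms2048m -Xmx2048m -XX:MaxPermSize=256m"
```

Setting -Xms equal to -Xmx avoids heap-resizing pauses; just leave enough physical RAM behind for the O/S, the same principle as the SQL max memory setting.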
By the way, just having a larger "pipe" does not necessarily mean better performance. If network utilization is not high, more bandwidth will not make an appreciable difference. A gigabit network can move a huge amount of data. What *does* make a big difference is that the data gets to the client the first time. A struggling server, underpowered client PCs, and poor switches or cabling can cause data and connections to time out. That forces retransmissions, which causes a domino effect across the network.
Posted : May 10, 2012 7:11 am