We upgraded our systems to a virtual environment about a year and a half ago. When we migrated our Centricity EMR database server into the environment, we found a huge decrease in performance. We monkeyed around trying to increase performance, but with little effect. The server eventually had to be removed from the environment.
After extensive research we believe we have isolated the problem to the shared storage (though we are not 100% sure this was the whole problem). It is a NetApp FAS2020, connected to the storage network with a single 1Gb connection, and the storage is shared via NFS. Three virtual hosts are connected to this system. Not the ideal setup for a database on a virtual server.
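As a rough sanity check on that single link (back-of-the-envelope only; real NFS throughput will come in under line rate, and latency matters more than bandwidth for a database):

# Rough numbers on the single 1Gb NFS link feeding three virtual hosts.
link_gbps = 1.0                           # one gigabit uplink to the FAS2020
hosts = 3                                 # virtual hosts sharing the same link

line_rate_mb_s = link_gbps * 1000 / 8     # ~125 MB/s best case for everything
per_host_mb_s = line_rate_mb_s / hosts    # if all three hosts are busy at once
print(f"~{line_rate_mb_s:.0f} MB/s total, ~{per_host_mb_s:.0f} MB/s per host when shared")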
The NetApp FAS2020 can be reconfigured for FC or iSCSI; however, some research has revealed that the FAS2020 has some limits on its writes due to its RAID setup and other complexities. So we have decided to pursue a shared SAS solution with much greater throughput and performance.
We are still a little wary, however. I am curious about the hardware setups of some successful Centricity EMR or CPS (we will be migrating to CPS soon) virtual environments.
Is there anybody out there willing to share some info about your virtual environment and its performance?
Specific questions:
1. Your storage configuration and connectivity: storage unit, protocol (FC, iSCSI, NFS, shared SAS). How well does it perform? Any performance figures?
2. LAN setup. Due to some licensing restrictions, our CPS implementation was limited to a single 1Gb LAN connection; we could not team NICs on the virtual hosts.
It would really be great to hear confirmation that there are some successful virtualized centricity servers out there.
Would appreciate any input.
Thanks,
Tom
[email protected] said: "We upgraded our systems to a virtual environment about a year and a half ago... Would appreciate any input."
Storage: a SAN connected to the VM server using Fibre Channel, with the space for the database allocated as iSCSI, not NFS. NFS performance is pathetic for databases with a high demand for input/output.
By the way, NetApp appliances are good for backups or for sharing user files, but not for databases. NetApp appliances are not known as high-performance storage unless you are talking about the enterprise level in the $80k+ price range.
In my case we use this unit's bigger brother:
http://h10010.www1.hp.com/wwpc.....#038;cc=us
It is connected to a fibre switch, which connects to the VM server where the EMR is, and also to another VM server running our CPS and other services, with no performance hiccups.
We use two EqualLogic PS6000 SANs in a single group. This gives 32 spindles for better performance. The SAN is set up with iSCSI through dedicated switches that connect to our VMware hosts. We have four hosts running ESXi 5. These hosts are dedicated to running the Centricity servers (we have others running non-CPS servers). We installed the Dell MEM kit to allow for multipath I/O data streams between the host servers and the datastore volumes. Our network is gigabit, with a few PCs still running 100Mb. We have around 500 PCs/servers on the network, with a max of 300 concurrent CPS users.
We started with CPS 9.5 and had lots of issues with frequent disconnects, application error dialog boxes, and very slow performance. We went to v10 hoping to get some relief. Nothing doing. We have gone from top to bottom on our network, and are working to change some things, but none of this would account for problems we've seen with everyone in the company having ongoing issues.
Our database server is running Server 2008 R2 and SQL 2008 SP2. It has 32GB of RAM. We finally started looking at how SQL was set up for memory reservation. We saw GE put the max SQL memory at 26000. We set the max memory back to 25000, giving the O/S 7GB. This seems to be working well. We also updated SQL to the latest SP3. There are some hotfixes that we have not applied yet.
Our JBOSS (application) server has 32g of RAM. We also modified some of the settings on the JBOSS server to increase the available memory and resources.
The result? Vastly reduced disconnects and pop-up errors. Things are running faster than before too.
aracheb said: "Storage: a SAN connected to the VM server using Fibre Channel... with no performance hiccups."
Thanks for the response. Just the kind of reassurance I am looking for. We are looking at the same P2000 G3 unit, but as shared SAS rather than FC. Performance should be similar. Thanks again.
gmghelpdesk said: "We use two EqualLogic PS6000 SANs in a single group... Things are running faster than before too."
Thank you for the response.
We have 70 concurrent users, and I was planning on 10-12 spindles in a RAID 10 with 10,000 RPM drives.
I am curious how you have your virtual servers connected to your LAN. Are they connected through a vSwitch using a single 1Gb adapter, or have you teamed multiple adapters to your LAN?
We too have noticed performance issues with our upgrade to CPS 10, and we are only using the PM side of things. We'll try some of your solutions. Glad you shared. Is there a possibility that the client side may be heavy? I am unfamiliar with exactly how the web interface interacts. I would intuitively think it would be much lighter on the client side, but I have been questioning that assumption since our upgrade.
You said: "am curious how you have your Vservers connected to your LAN. Are they connected though a Vswitch using a sole 1Gb adapter or do you have you teamed multiple adapters to your LAN.
We too have noticed performance issues with our upgrade to CPS 10 and we are only using the PM side of things. We'll try some of your solutions. Glad you shared. Is there a possibility that the client side may be heavy. I am unfamiliar exactly how the web interface interacts. I would intuitively think it would much lighter on the client side but I have been questioning that assumption since our upgrade."
We have two 1Gb NICs from each host set to aggregate all LAN traffic to the switch. The associated production switch ports are set up as LAGs. This not only provides up to 2Gb of throughput, but also provides redundancy. Both host NICs are on the same vSwitch.
I should have noted earlier that the connections between the hosts and the SAN are set to a 9000 MTU. We also set two vMotion NICs to a 9000 MTU to allow for fast moving of VMs between hosts. After doing so, we used PuTTY to connect to one host to look at the network performance. Using esxtop and its "n" command, we saw a solid 95% throughput on both 1Gb NICs as a VM was moved from one host to another.
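For a rough sense of what that throughput buys you (back-of-the-envelope only; the VM memory size below is an illustrative figure, not one of our actual servers):

# What ~95% utilization of two 1Gb links means for moving a VM between hosts.
links = 2                       # vMotion NICs in use
line_rate_gbps = 1.0            # per link
utilization = 0.95              # roughly what esxtop showed during the move

throughput_mb_s = links * line_rate_gbps * utilization * 1000 / 8
vm_memory_gb = 16               # hypothetical VM memory footprint, for illustration

minutes = vm_memory_gb * 1024 / throughput_mb_s / 60
print(f"~{throughput_mb_s:.0f} MB/s aggregate; ~{minutes:.1f} minutes to copy {vm_memory_gb} GB of VM memory")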
Whenever users connect to CPS, they connect to the web interface. This starts the Java session for each user. The JBoss server then establishes a connection to the SQL database server. You will be miles ahead if you invest in a fast network and good hardware. This means everything from the servers down to the end-user PCs.
CPS will run much better if you can give each client lots of resources. We have users running older PCs with only a gig of RAM on XP. It works, but it is dog slow. We have other users with newer boxes and 8GB of RAM with 64-bit operating systems. Though CPS is still not lightning fast, the users on newer systems have better performance moving from screen to screen. Some things never change: newer programs are designed to use more resources, and CPS is no different.
[email protected] said: "We upgraded our systems to a virtual environment about a year and a half ago... Would appreciate any input."
We have an EMC CLARiiON back end with FC-connected HBAs (4Gbit) to our VMware cluster. Performance was not bad, but after a bad CPS 10 upgrade and subsequent troubleshooting of CPS 10 problems, we went to a physical JBoss server, and that appeared to clear up the problems for about a month. However, we started to have degraded performance again, with CPS slowing to a stop multiple times per day.
Our LAN infrastructure is all Cisco, mostly 3560G switches, which also do our routing and VLAN configuration. We have 4Gb EtherChannel between each switch, configured with Spanning Tree Protocol so we can lose a switch and it won't affect the other switches. All switches have RPS, UPS, and generator power backup and should never go down. Our switch CPU utilization seemed to increase with the CPS 10 upgrade, but I think that is due to the client/server nature of the newer clients.
I would look to increase your throughput beyond 1Gb in your configuration. As you grow the user base, there is a lot more traffic to the JBoss server. When you open a patient, it is like opening six web pages at once, with JBoss as the web server.
Mike Zavolas
Tallahassee Neurological Clinic
Thanks Mike,
I think a 48-port 3560G is looking like a good investment. We are worried about our EMR merge into CPS 10.
You may want to check out some of the modifications noted by gmghelpdesk above.
Tom
gmghelpdesk said: "We finally started looking at how SQL was set up for memory reservation. We saw GE put the max SQL memory at 26000. We set the max memory back to 25000, giving the O/S 7GB."
What is the rule of thumb there? In discussions with support, we touched on it and they didn't really want more memory allocated to the VM for SQL. I could have allocated much more but they were happy with their settings and nothing we saw in perfmon suggested we should change anything.
Just a curiosity on my part...
gmghelpdesk said: "Our JBOSS (application) server has 32g of RAM. We also modified some of the settings on the JBOSS server to increase the available memory and resources."
For us, I was going overboard here compared to what GE recommended and what they ended up scaling it back to. I had 24GB allocated to the VM, and they actually had me drop it way down to 8GB, and that is what we bought when we converted back to physical hardware.
gmghelpdesk said: "The result? Vastly reduced disconnects and pop-up errors. Things are running faster than before too."
More memory did not appear to help us on JBoss. Weird, as it goes against everything I know about Java.
Mike Zavolas
Tallahassee Neurological Clinic
You said: "More memory did not appear to help us on jboss. Weird as it goes against everything I know about java"
Did you add memory to the JBOSS server itself, or to the various JVM settings?
You said: " Our database server is running Server 2008r2, and SQL 2008 SP2. It has 32g RAM. We finally started looking at how SQL was set up for memory reservation. We saw GE put the max SQL memory at 26000. We set the max memory back to 25000, giving the O/S 7g. This seems to be working well. We also updated SQL to the latest SP3. There are some hotfixes that we have not applied yet.
What is the rule of thumb there? In discussions with support, we touched on it and they didn't really want more memory allocated to the VM for SQL. I could have allocated much more but they were happy with their settings and nothing we saw in perfmon suggested we should change anything."
There is no rule of thumb that I know of. Basically, the idea is to give the O/S the amount of RAM it needs to take care of business and allocate the rest to SQL. If the box does nothing but SQL, the O/S may only need 4GB. As you watch perfmon, you can see if the O/S is struggling. Just reduce the SQL max memory until the box is happy.
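If it helps, the knob in question is SQL Server's standard "max server memory" option. A rough sketch of setting it from a script is below (the server name and connection details are placeholders; the same two sp_configure calls can just as easily be run from Management Studio):

import pyodbc

# Placeholder connection string -- point it at your own database server.
conn = pyodbc.connect(
    "DRIVER={SQL Server};SERVER=your-db-server;DATABASE=master;Trusted_Connection=yes;",
    autocommit=True,  # RECONFIGURE should not run inside an open transaction
)
cur = conn.cursor()

# 'max server memory (MB)' is an advanced option, so expose it first.
cur.execute("EXEC sp_configure 'show advanced options', 1; RECONFIGURE;")

# Cap SQL at 25 GB on a 32 GB box, leaving roughly 7 GB for the O/S.
# Adjust the number based on what perfmon tells you about O/S memory pressure.
cur.execute("EXEC sp_configure 'max server memory (MB)', 25000; RECONFIGURE;")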
GE support can give recommendations, but you know the environment best. Tune the box for best performance based on your environment. We had GE support on our server many times. They saw how it was thrashing around, but never made recommendations beyond throwing more hardware at the problem. An example was our JBOSS server. We built the box based on their recommendations. It was not performing well, so they told us to add two more processors to it. We came to the realization that we knew our systems better than they did, so we began to hone things ourselves.
The result? We saw *better* performance from the JBOSS server with only 2 processors instead of their recommended 4. We added RAM to the JBOSS server, then tweaked the JVM settings to allocate more resources to it.
By the way, just having a larger "pipe" does not necessarily mean better performance. If network utilization is not high, then more bandwidth will not make an appreciable difference. A gigabit network can move a huge amount of data. What *does* make a big difference is that the data gets to the client the first time. A struggling server, underpowered client PCs, and poor switches or cabling can cause the data and connections to time out. This forces the data to be retransmitted, which causes a domino effect across the network.
Another question.
As mentioned above, our plan was 10-12 SFF 10,000 RPM drives in RAID 10. Looking over the literature, Centricity recommends 11 SFF drives in RAID 5. I was always under the impression that RAID 10 was a better performer for database transactions. And after a couple of experiences with two drives failing at nearly the same time, I do not feel comfortable with RAID 5. I would be more comfortable with RAID 6; however, I understand that RAID 6 is an even poorer performer than RAID 5.
Any comments on any of this?
[email protected] said: "As mentioned above, our plan was 10-12 SFF 10,000 RPM drives in RAID 10... Any comments on any of this?"
I configure RAID 10 on DB servers. I have heard about the RAID 6 write penalty due to the extra parity calculations, but I figured that someday the faster CPUs on controllers would negate it.
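For what it's worth, the bigger cost is not the parity math itself but the extra disk operations each small random write generates. Rough numbers for a 12-spindle set, assuming ~140 random IOPS per 10k drive (an assumed figure for illustration, not a measurement):

# Rough effective random-write IOPS for a 12-spindle set under different RAID levels.
drives = 12
iops_per_drive = 140                     # assumed random IOPS for a single 10k spindle
raw_iops = drives * iops_per_drive

# Classic write penalties: back-end I/Os generated per host write.
write_penalty = {"RAID 10": 2, "RAID 5": 4, "RAID 6": 6}

for level, penalty in write_penalty.items():
    print(f"{level}: ~{raw_iops // penalty} random write IOPS "
          f"(reads still see roughly the full {raw_iops})")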
You said: "I configure RAID 10 on DB servers. I have heard about the RAID 6 write penalty due to the extra parity calculations..."
If you don't mind me asking, how many spindles and how fast? Also, how many concurrent users are on your system?
Thanks,
Tom