Upgrading QNAP NAS (2x) - tough choices

Started 3 months ago | Discussions
PHXAZCRAIG Forum Pro • Posts: 17,406
Upgrading QNAP NAS (2x) - tough choices

I want more speed, specifically SSD tiering, in my two QNAPs.     But where to begin, and where will the biggest benefit come from?

I have two QNAPs, one used as an iSCSI target for three VMware 'test' servers, which also host some important virtual servers, including my website, poor as it is.   The virtual servers are served up via 1GbE iSCSI links, and accessed via separate 1GbE links.

The other (not an iSCSI target) does a bit of everything else, but its most important function is backing up my photo and video library (about 4TB currently).  I have 27TB of usable space on this QNAP, so it also does things like streaming video (occasionally).

If I am to upgrade, I'm just starting to work out where to begin.   If I do the iSCSI infrastructure, it will be more convenient to start and stop my virtual servers - but I only need to do that when firmware-updating the iSCSI QNAP target.   Unfortunately, they seem to have firmware updates almost monthly, and sometimes more often.  Very disruptive to have to restart about 15 servers.   But other than that, I don't see iSCSI speedups having any real day-to-day benefit to me.
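As an aside, the restart chore can at least be scripted from the ESXi shell. A rough sketch, assuming SSH is enabled on each host and VMware Tools is running in every guest:

    # Ask every registered VM for a graceful guest shutdown
    for id in $(vim-cmd vmsvc/getallvms | awk 'NR>1 {print $1}'); do
        vim-cmd vmsvc/power.shutdown "$id"
    done
    # After the QNAP firmware update, power them back on
    for id in $(vim-cmd vmsvc/getallvms | awk 'NR>1 {print $1}'); do
        vim-cmd vmsvc/power.on "$id"
    done

That wouldn't make the updates any less frequent, just less tedious.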

If I upgrade my LAN side, then I can access those virtual servers (and the QNAP NAS) at higher speeds whenever I need to access something, and backups should be much faster too.   The two QNAPs are the only devices in my network not running off SSDs, but of course that means none of my virtual servers are either.

The cost to upgrade my iSCSI infrastructure - and here I have a cable-plant limitation, because I only have Cat5e in the walls - would be a 5- or 8-port switch, three quad-port NICs (2.5, 5, or 10GbE) for the ESXi servers, and a new QNAP with a fast NIC.

To feel it at my end PCs, I need a 24-port switch, three fast NICs (three PCs in use), three fast quad-port NICs for the ESXi servers, and a new QNAP.

To do both, I'll need two switches and two QNAPs.

I feel like the starting point here is one of the QNAPs, since I would gain a fair amount from SSD tiering regardless of network speed.    Doing both of them is a bit off-putting right now due to the cost of both QNAPs ($800 or so for a TS-673), upgrading the RAM in them to the max, and adding in one or two SSDs for caching.

Where would you begin?

--

Phoenix Arizona Craig
www.cjcphoto.net
"I miss the days when I was nostalgic."

PHXAZCRAIG's gear list: Nikon D80 Nikon D200 Nikon D300 Nikon D700 Nikon 1 V1 +37 more
DerKeyser Contributing Member • Posts: 736
Re: Upgrading QNAP NAS (2x) - tough choices

Remember that for iSCSI you don’t need faster Ethernet links. iSCSI can use several gigabit links in parallel if you set it up correctly. On ESXi you need to make more than one VMkernel adapter and do iSCSI port binding on them. On your QNAP you need to present the same LUN on several NIC targets. Then you should enable Round Robin I/O in ESXi, and you will have 2Gbit, 3Gbit, or however many adapters you enable.
I would set up two adapters on each ESXi server, and three or four on the QNAP. That way each server can do 2Gbit, and the QNAP can do 3 or 4Gbit combined across the ESXi hosts.
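From the ESXi shell it looks roughly like this - a sketch only, since the software iSCSI adapter name (vmhba64 here) and the VMkernel ports you create (vmk1/vmk2) will differ on your hosts:

    # Bind two VMkernel ports to the software iSCSI adapter (port binding)
    esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk1
    esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk2
    # List devices to find the QNAP LUN's naa ID, then switch it to Round Robin
    esxcli storage nmp device list
    esxcli storage nmp device set --device=naa.XXXX --psp=VMW_PSP_RR

You can do the same in the vSphere client; the point is just that every extra bound port adds another gigabit path to the LUN.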

Obviously that also requires an SSD layer in the QNAP if there is anything remotely random in the I/O pattern (and there is, since you have 15 VMs).

The most “felt” gain will definitely come from adding a large SSD tier to your iSCSI LUNs. The boot time and “interactiveness” of your VMs will become MUCH better.

--

Happy Nikon Shooter
See my articles and galleries at: https://wolffmadsen.dk

Mickey67 Regular Member • Posts: 474
Re: Upgrading QNAP NAS (2x) - tough choices

To comment on what I know: check the performance monitor on the QNAP; unless you are running significant apps on the unit, it is unlikely to be memory constrained. I have added memory in the past and it seemed to make no difference to I/O performance.
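If you have SSH enabled on the QNAP you can sanity-check memory from a shell as well - a rough sketch (QTS uses a busybox userland, so output is terse):

    # Free/used RAM in MB, plus the load average
    free -m
    cat /proc/loadavg

If free shows plenty of unused RAM while the unit is busy, more memory won't help I/O.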

Regarding teaming, my understanding is that it makes a bigger difference when multiple processes are accessing the NAS simultaneously. If that is not the case, then I would upgrade the LAN speed on a single path. Sounds like you are relatively lightly loaded.
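An easy way to see that effect on your own gear, assuming iperf3 is installed on both ends (nas.local is a placeholder hostname):

    # One stream - roughly what a single backup job sees
    iperf3 -c nas.local -t 30
    # Four parallel streams - where teaming/multipath can actually help
    iperf3 -c nas.local -t 30 -P 4

If the single stream already saturates the link, teaming won't make that one transfer any faster.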

We have 10Gbit on our VMware; I tried multipath on Hyper-V and it didn't seem to make any difference for our workload. My expertise level on this is not high, however.

Mickey67's gear list: Leica M Typ 240 Canon 6D Mark II Nikon Z6 II Leica Summicron-M 35mm f/2 ASPH Leica Summicron-M 50mm f/2 +1 more
sshapiro Contributing Member • Posts: 599
Re: Upgrading QNAP NAS (2x) - tough choices

You provided me with some good feedback recently, so I'm sure you know a lot more than I do, but I'll give you a few suggestions. I don't know if any of the following information is helpful, but you might find an idea that could mitigate at least some of your challenges.

Could you use local storage within each VM server and avoid the iSCSI speed problem entirely? Are all of your devices located in different rooms? If you can get your devices into the same location, you could use higher-speed twisted-pair patch cables, or maybe DAC cables, to connect devices directly instead of via a switch.

I just finished building an OpenMediaVault NAS to migrate away from a QNAP. I installed an Intel X520-DA2 10GbE dual-port NIC in the OMV NAS and am using one port to connect to a 10GbE switch port. I have a second X520-DA2 on order that I will install in another computer I am going to try setting up with ESXi or Proxmox. One port of that NIC will go into a 10GbE port on the switch, and the other port will connect directly to the second port on the NAS, via a DAC cable, to serve as a dedicated link for iSCSI. I'm not sure I will stick with the ESXi or Proxmox solutions, but I want to try them and also evaluate iSCSI.
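For the direct DAC link the plan is just to give each end a static address on its own tiny subnet - a sketch with placeholder interface names (the X520 ports will enumerate differently on each box):

    # NAS end of the point-to-point iSCSI link
    ip addr add 10.10.10.1/30 dev enp3s0f1
    # The other end, if it runs Linux (ESXi would use a VMkernel port instead)
    ip addr add 10.10.10.2/30 dev enp3s0f1

With a /30 there is no switch or gateway involved, so the iSCSI traffic stays off the rest of the network.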

My two desktops (one Windows 10, the other Linux) are connected to 10GbE ports on the switch but only run at 2.5GbE. I have mounted Samba shares from the NAS on each desktop and am able to write data to the NAS at up to 290MB/second and read data at about 200MB/second, using SSDs on both ends. I set up a VLAN on the switch for the 10GbE traffic, and each system also has a 1GbE connection via the NICs integrated on the motherboards. My hardware is all in the same room, but each system is able to access the rest of the computers in my house, and the Internet, via the 1GbE connection from the Aruba switch.
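If anyone wants to reproduce that kind of measurement, a simple dd run works - a sketch, assuming the share is mounted at /mnt/nas (conv=fdatasync forces the final flush into the timing):

    # Write test: 8GB, flushed to the NAS before dd reports a rate
    dd if=/dev/zero of=/mnt/nas/testfile bs=1M count=8192 conv=fdatasync
    # Read test: drop the local page cache first so the read goes over the wire
    sync && echo 3 | sudo tee /proc/sys/vm/drop_caches
    dd if=/mnt/nas/testfile of=/dev/null bs=1M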

The 2.5GbE cards cost me about $20 each and the Intel X520-DA2 cards were under $100 each from eBay. The switch, an Aruba S2500-24T, was under $100 from Amazon. I think QNAP products are great, but being able to select my own hardware and NAS software gave me a lot more flexibility and performance.

OP PHXAZCRAIG Forum Pro • Posts: 17,406
Re: Upgrading QNAP NAS (2x) - tough choices

DerKeyser wrote:

Remember that for iSCSI you don’t need faster Ethernet links. iSCSI can use several gigabit links in parallel if you set it up correctly.

Yes - but it does require more than one cable.  My servers are spread around the house in closets, and there are only two Cat5 outlets in each room.

On ESXi you need to make more than one VMkernel adapter and do iSCSI port binding on them. On your QNAP you need to present the same LUN on several NIC targets. Then you should enable Round Robin I/O in ESXi, and you will have 2Gbit, 3Gbit, or however many adapters you enable.

And there I would need more than one (iSCSI) NIC on the QNAP.  They only have two.

I would set up two adapters on each ESXi server, and three or four on the QNAP. That way each server can do 2Gbit, and the QNAP can do 3 or 4Gbit combined across the ESXi hosts.

Obviously that also requires an SSD layer in the QNAP if there is anything remotely random in the I/O pattern (and there is, since you have 15 VMs).

The most “felt” gain will definitely come from adding a large SSD tier to your iSCSI LUNs. The boot time and “interactiveness” of your VMs will become MUCH better.

Well, for the most part my servers are all test servers, and some are getting quite old indeed.  I still have three NetWare servers, two of them clustered.   The only one that much matters is my openSUSE server that hosts my website.

Anyway, to do bonding or multipath, which I've done before with iSCSI, I'd need more cabling than I have.  If I start looking at replacing the cable plant, things are going to an extreme here.   I'd have to start by upgrading at least the iSCSI QNAP, and right away I'm looking at over $1000, perhaps close to $2000 or more with SSDs and faster NICs.

--

Phoenix Arizona Craig
www.cjcphoto.net
"I miss the days when I was nostalgic."

PHXAZCRAIG's gear list: Nikon D80 Nikon D200 Nikon D300 Nikon D700 Nikon 1 V1 +37 more
DerKeyser Contributing Member • Posts: 736
Re: Upgrading QNAP NAS (2x) - tough choices

PHXAZCRAIG wrote:

DerKeyser wrote:

Remember that for iSCSI you don’t need faster Ethernet links. iSCSI can use several gigabit links in parallel if you set it up correctly.

Yes - but it does require more than one cable. My servers are spread around the house in closets, and there are only two Cat5 outlets in each room.

On ESXi you need to make more than one VMkernel adapter and do iSCSI port binding on them. On your QNAP you need to present the same LUN on several NIC targets. Then you should enable Round Robin I/O in ESXi, and you will have 2Gbit, 3Gbit, or however many adapters you enable.

And there I would need more than one (iSCSI) NIC on the QNAP. They only have two.

I would set up two adapters on each ESXi server, and three or four on the QNAP. That way each server can do 2Gbit, and the QNAP can do 3 or 4Gbit combined across the ESXi hosts.

Obviously that also requires an SSD layer in the QNAP if there is anything remotely random in the I/O pattern (and there is, since you have 15 VMs).

The most “felt” gain will definitely come from adding a large SSD tier to your iSCSI LUNs. The boot time and “interactiveness” of your VMs will become MUCH better.

Well, for the most part my servers are all test servers, and some are getting quite old indeed. I still have three NetWare servers, two of them clustered. The only one that much matters is my openSUSE server that hosts my website.

Anyway, to do bonding or multipath, which I've done before with iSCSI, I'd need more cabling than I have. If I start looking at replacing the cable plant, things are going to an extreme here. I'd have to start by upgrading at least the iSCSI QNAP, and right away I'm looking at over $1000, perhaps close to $2000 or more with SSDs and faster NICs.

Right... I would suggest you shuffle the deck and start over. The power savings, performance, and flexibility gained by throwing it all away and replacing it with just one new unit are going to make a huge difference. MUCH less time spent mending problems in such a setup.

--

Happy Nikon Shooter
See my articles and galleries at: https://wolffmadsen.dk

OP PHXAZCRAIG Forum Pro • Posts: 17,406
Re: Upgrading QNAP NAS (2x) - tough choices

I used to have one big unit - the 27TB one.    However, I had always maintained a separate iSCSI NAS and was reluctant to switch over.  (And I was right to be reluctant.  Unlike with FreeNAS 9.x, I've had numerous firmware updates on the QNAP that require me to reboot my iSCSI target, and that's certainly inconvenient.)

I got two QNAPs a bit by accident.  I liked the first one, and I had three spare 2TB drives...  bought a fourth and had enough to fill a 4-bay, so I just got one identical to the one I had.  (Kind of a hot spare - I could move the drives from one box to the other in an emergency.)   I find it useful to keep the iSCSI stuff dedicated to just one of the QNAPs - for 'production use'.  That is, my VMware guest files are kept there.   That QNAP went through some disk changes and now has four 4TB drives in a RAID 5 array.  5TB is dedicated to iSCSI.   The rest of the space is a backup of the photos that are on the other QNAP.

But I also set up a 5TB iSCSI target on the 'big' (27TB) QNAP - after all, 27TB!    I set up a datastore and connected each of my ESXi servers to it as a second iSCSI disk.   Then I set up Veeam to replicate each of my guests, and I store all the replicas on the second (big) QNAP.   Thus, if the iSCSI QNAP goes belly up, I can simply launch all the replicas from the second iSCSI target.    So I have uses for two of them - perhaps even more if they were faster.

I've been looking at 6-bay QNAPs, with the idea of swapping my disks in, then filling the remaining bays with SSD cache disks.   The ones I'm looking at also have two PCIe slots for possible 10GbE, but I'm looking more at 2.5GbE at this point.  Easier to achieve.   Problem is, the 6-bays I want are about $1300 empty, and I'd need at least some SSDs to add.

--

Phoenix Arizona Craig
www.cjcphoto.net
"I miss the days when I was nostalgic."

PHXAZCRAIG's gear list: Nikon D80 Nikon D200 Nikon D300 Nikon D700 Nikon 1 V1 +37 more
calson Forum Pro • Posts: 10,521
Re: Upgrading QNAP NAS (2x) - tough choices

I use the 951x, which has provision for adding SSD cache drives, but as a single user the gain is trivial, as I do not access the same files again and again. This model does have a 10GbE port, and that is what I attach directly to my workstation, while the 1GbE port on the QNAP is hooked into a Cisco switch that has 10 ports.

It's important to have a good switch, as many will support 1GbE on one or two ports but not on the rest of their ports concurrently - the processor chips will overheat and shut down the port.

The NAS processor and available RAM have a big impact on throughput, as does the RAID configuration. I am using RAID 5 with the 951x and its 5 drives; I chose best performance over being able to recover with 2 failed drives.

QNAP has a great deal of useful information for selecting a NAS for a particular use, as does Synology.

When I was doing massive batch-processing jobs, I installed two mirrored drives inside the workstation so they could run off the bus and not suffer reduced throughput from an external port and all the overhead involved in wrapping and unwrapping data containers.

--

"It is horrifying that we have to fight our own government to save the environment." Ansel Adams

calson's gear list: Nikon D5 Nikon D850