Thecus N16000PRO Advanced Testing
Some time ago we published a Thecus N16000PRO NAS review in which we examined most of the device's capabilities and measured the key performance figures of the storage system. Nor did we overlook the Thecus D16000 expansion unit, which lets the administrator expand the disk space available to the NAS. However, Thecus also provides a number of other capabilities that either expand the disk space available to users even further or maintain fault-tolerant operation of the network storage system. These are exactly the capabilities we promised to cover in one of our previous reviews. It's time to deliver on that promise, so in this review we will walk our readers through the following features, tested with two N16000PRO units: stacking, volume expansion, and high availability.
The idea behind stacking technology is very simple: an iSCSI target is created on one of the NASes and then connected from the second NAS. The iSCSI target is created in the iSCSI sub-group of the NAS group in the device web interface.
To add a remote partition, one should use the NAS Stacking sub-group in the same menu group.
The only thing left to do after connecting it is to format the new partition.
The new disk space becomes available to users as a separate folder (depending on the access protocol used).
Naturally, we decided to measure the performance of the NASes with stacking in use. The test-stand diagram is presented below. The light-green arrow shows the direction of data transfer between the test host and the main NAS; the red arrow shows the direction of data transfer between the two storage systems. In other words, a dedicated link between the devices was used in this case.
First we compared the performance of various RAID types with the EXT4 file system. We then measured the NAS performance for RAID60 with various file systems: EXT4, BTRFS, and XFS. The measurement results are presented in the diagrams below.
Stacking technology also allows a remote iSCSI target to be connected over the same link that serves user traffic, as shown in the scheme below (the light-green arrow shows a user's connection to the NAS, while the red arrow shows the connection between the NASes).
The diagram below shows stacking performance over the shared link for RAID60 with the EXT4 file system.
The diagram shows that a dedicated link between the NASes does not bring any substantial increase in the performance of the system as a whole.
A 10 GE connection is required for the volume expansion feature to work (Network Management - Networks).
If the device has an active 10 GE connection, one can start creating an expansion unit (NAS - Volume Expansion Wizard). To do this, select the disks, specify the array type and the network connection to use, and set the address of the main device to which all other Thecus storage systems will be connected.
Once all expansion units in the network are created and connected (up to eight units), a distributed array must be built (JBOD with XFS).
A created distributed array behaves almost identically to a local one.
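Conceptually, a JBOD concatenation of this kind maps a logical offset onto member devices laid end to end, much like a Linux linear md device. The sketch below is our own illustrative model, not Thecus code; the member sizes are hypothetical:

```python
def locate(offset, member_sizes):
    """Return (member_index, local_offset) for a logical offset
    in a linear (JBOD) concatenation of members laid end to end."""
    for i, size in enumerate(member_sizes):
        if offset < size:
            return i, offset
        offset -= size
    raise ValueError("offset beyond end of concatenated volume")

# Three hypothetical members of 100, 100, and 50 units:
members = [100, 100, 50]
print(locate(120, members))   # logical offset 120 lands at offset 20 of member 1
```

The file system (XFS in Thecus's case) sees one contiguous address space, while reads and writes are dispatched to whichever local or remote unit holds the requested range.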
Just as in the stacking tests, we connected the NASes with a dedicated link. The diagrams below show the measured user data transfer speeds for VE grouping of the devices over the SMB and iSCSI protocols.
We also measured system performance for RAID60 over the SMB and iSCSI protocols with the NASes connected via dedicated and shared links.
The speed gain of the system with the dedicated link stays within the measurement accuracy; in all other respects (with the exception of one measurement) the performance of the two configurations can be considered equal.
We also compared the performance of the technology under review with a single expansion unit located on the local device versus a remote one.
Over SMB there is almost no difference between local and remote connections, whereas over iSCSI the locally attached disks allow higher access speeds.
The ability to combine several local disk sets, assembled into arrays of different types, turned out to be a nice capability. The diagrams below compare the performance of a RAID6+RAID60 combination against same-type RAID6 and RAID60 combinations. Data transfer between the N16000PRO units was carried out over a dedicated link.
Over SMB the performance difference between the RAID6+RAID6, RAID60+RAID60, and RAID6+RAID60 distributed arrays is barely visible, whereas over iSCSI it is clearly noticeable, which is related to the performance of the iSCSI connection itself.
Apart from expanding the disk space available to users, Thecus NASes also let the administrator build a fault-tolerant system, that is, a system that maintains high availability of the information stored on it. Building such a system is obviously not a trivial task, and even with the toolkit available to owners of Thecus storage systems it is impossible to eliminate outages and breakdowns entirely. It is possible, however, to mitigate their impact on the company's workflow, although deploying a fault-tolerant solution is not free. But let's not get ahead of ourselves and cover everything step by step.
One of the simplest ways to increase system availability is network interface aggregation, with which the administrator can protect the system against a single failure of a NAS network card, a switch port, or a patch cord. Ideally, such an aggregated link should be connected not to a single switch but to a switch stack or to different interface cards in one chassis. The switches can also be combined with VSS technology, or they can provide a vPC (virtual Port Channel) connection towards the NAS; however, these topics fall far beyond the scope of this article. There are also simpler ways to protect against the failure of a single storage system interface, iSCSI MultiPath being one of them. Unfortunately, that method cannot be called universal since, for example, it is not supported by desktop versions of Microsoft Windows. But we digress: what can Thecus actually offer? High Availability technology lets administrators not only protect against the failure of one of the links between the storage system and the network, but also keep the storage service running if an entire unit fails. It is fair to mention that the N16000PRO model also has internal redundancy mechanisms of its own, among them two independent power supply units and two internal flash modules holding the device OS. Let's take a closer look at the High Availability mechanism.
Two Thecus storage systems grouped with HA technology must be connected to the company network. A dedicated connection is used to replicate data from the active NAS to the backup one. A sample connection scheme is presented below. It is worth noting that this scheme does not provide 100% fault tolerance since it does not account for, say, redundancy of the switch or the server link. Here we are simply considering restoration of storage service after the failure of one of the storage systems.
The solid light-green line shows the data flow between the client and the storage system. The red line shows the channel carrying status information about the active system (Heart Beat). The dashed light-green line shows the data flow after the users reconnect to the backup NAS.
In fact, the replication traffic can pass through the same network infrastructure as user traffic. The only strict requirement is a dedicated interface for the Heart Beat connection; in other words, the main and backup NASes must be connected by a high-speed L2 link. An approximate scheme of the interaction between the main and backup systems is presented below.
The High Availability feature is configured in the sub-group of the same name within the NAS group of the device web interface.
If necessary, the administrator can set a delay before switching over to the backup system.
Configuring the backup storage system is very simple, since all parameters are received from the main host during synchronization.
Once both devices boot, their states are synchronized. It is worth noting that all user data stored on the devices is deleted during the synchronization process.
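Conceptually, the backup node watches the Heart Beat channel and promotes itself only after the heartbeat has been silent longer than the configured switching delay. The sketch below is our own illustrative model, not Thecus code; class names and timings are assumptions:

```python
import time

class BackupNode:
    """Illustrative model of an HA backup node with a switching delay."""

    def __init__(self, failover_delay_s):
        self.failover_delay_s = failover_delay_s
        self.last_heartbeat = time.monotonic()
        self.active = False                     # starts in standby

    def on_heartbeat(self):
        """Called whenever a heartbeat arrives over the dedicated link."""
        self.last_heartbeat = time.monotonic()

    def check(self, now=None):
        """Promote to active if the main NAS has been silent too long."""
        now = time.monotonic() if now is None else now
        if not self.active and now - self.last_heartbeat > self.failover_delay_s:
            self.active = True                  # take over serving the clients
        return self.active

node = BackupNode(failover_delay_s=30)
node.last_heartbeat = 0          # pretend the last heartbeat arrived at t=0
print(node.check(now=10))        # silent for 10 s, within the delay: False
print(node.check(now=45))        # silent for 45 s, delay exceeded: True
```

A longer delay avoids spurious failovers on brief network hiccups at the cost of a longer outage when the main unit really dies, which is presumably why the vendor makes it configurable.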
Unfortunately, we could not find a way to swap the NAS roles, in other words to exchange the main and backup devices with minimal downtime. Such a swap may be needed if the channel to the main storage system degrades. Nor does the vendor currently provide a way to update the firmware without interrupting service.
Naturally, we could not help but test the performance of the resulting fault-tolerant system. We began by comparing RAID performance with the EXT4 file system, chosen because it is used by default when an array is created.
We then compared user data access speeds with various file systems for RAID60.
The Thecus fault-tolerance technology supports the next-generation Internet Protocol, IPv6. Unfortunately, IPv6 support in the NAS cannot be called complete: for example, iSCSI stops working in the interface once IPv6 is enabled. We hope the vendor fixes this issue in future firmware versions. Fortunately, we were able to connect to the storage system over SMB via IPv6. The diagram below compares user data access speeds over SMB via IPv4 and IPv6. The device performs significantly better with IPv4; we hope the vendor manages to fix the speed drop we observed with IPv6.
This concludes our review of the Thecus high availability technology; let's move on to the summary.
This time our lab hosted two top-end Thecus N16000PRO NASes, which is exactly why we could not resist testing the features that require two or more devices: High Availability, Stacking, and Volume Expansion, used to increase the availability of the storage system and to expand the disk space available to users. We believe the vendor's steps towards providing administrators with advanced functionality are right and highly important. However, at the moment we cannot call this functionality fully mature, given certain limitations in its operation.
The strengths of the technologies we tested are listed below.
- Ability to combine several local disk sets of different array types into one distributed array using the VE feature
- Built-in redundancy mechanisms (two power supply units and two flash modules with the OS)
- Protection against failure of an entire NAS (when the HA feature is used)
- High data access speeds via iSCSI
- Ability to expand a contiguous disk space to approximately 4 PB in a system of 640 HDDs (VE of 8 elements (N16000PRO+4xD16000) using 6 TB disks)
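The roughly 4 PB figure above can be checked with quick arithmetic (each chassis, head unit or expansion unit alike, holds 16 drives):

```python
bays_per_chassis = 16        # both N16000PRO and D16000 hold 16 drives
chassis_per_element = 1 + 4  # one N16000PRO head plus four D16000 units
elements = 8                 # VE supports up to eight elements
tb_per_disk = 6

disks = elements * chassis_per_element * bays_per_chassis
raw_tb = disks * tb_per_disk
print(disks, raw_tb / 1000)  # 640 drives, 3.84 PB of raw capacity
```

That is 3.84 PB raw, before any RAID overhead, hence "approximately 4 PB".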
Unfortunately, we cannot help but mention several drawbacks we discovered.
- HA and VE cannot operate simultaneously
- No fully-fledged IPv6 support
- Only one high availability group can be created at a time
A few words about the test stand we used. Since the time we tested a single Thecus N16000PRO NAS, it has undergone some minor changes: the latest OS updates were installed, and the CPU was replaced, so we now use an Intel Core i7-4790K instead of an i7-4770K. In addition, we upgraded the NAS firmware to version 2.05.04. A D-Link DGS-3420-28PC switch was used to build the auxiliary network infrastructure. The other key specifications of the test stand are presented below. The amount of RAM available to the PC OS was manually reduced with the msconfig utility to comply with Intel's recommendations for the NASPT 1.7.1 utility, which we used to carry out all tests.
| Component | Model |
| --- | --- |
| Motherboard | ASUS Maximus VI Extreme |
| CPU | Intel Core i7-4790K 4 GHz |
| RAM | DDR3 PC3-10700 SEC 32 GB |
| OS | Windows Server 2008 R2 x64 |
We also decided to share with our readers the data access speeds for the storage systems under review, measured with the new firmware and the new test stand. The measurements were carried out over the SMB and iSCSI protocols via IPv4 for a RAID60 disk array with the EXT4 file system.
It is easy to see that the device performs significantly better over iSCSI; we observed similar results in all the other tests carried out this time. We would also like to point readers once more to the previously measured performance of the 4 TB HGST Deskstar NAS 0F22408 HDDs we used.
The author and editorial team are grateful to Tayle Company, the official distributor of Thecus network equipment in Russia, for kindly furnishing us with the NASes and HDDs for testing.