Channel: High Availability (Clustering) forum

Add Node to Hyper-V Cluster running Server 2012 R2


Hi All,

I am in the process of upgrading our Hyper-V cluster to Server 2012 R2, but I am not sure about the required validation test.

The situation at the moment: a 1-node cluster running Server 2012 R2 with 2 CSVs and a quorum disk, and an additional server prepared to add to the cluster. One CSV is empty and could be used for the validation test; on the other CSV, 10 VMs are running in production. When I start the Validation wizard I can select specific CSVs to test, which makes sense ;-) But the warning message is not clear to me: "To avoid role failures, it is recommended that all roles using Cluster Shared Volumes be stopped before the storage is validated." Does it mean that ALL CSVs will be tested and switched offline during the test, or just the CSV that I have selected in the options? I definitely have to avoid the CSV where all the VMs are running being switched offline, and also the configuration being corrupted after losing the CSV the VMs are running on.

Can someone confirm that ONLY the selected CSV will be used for the validation test?

Many thanks

Markus
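For reference, the same validation can be driven from PowerShell, where the storage tests can be pointed at a specific disk. A minimal sketch, assuming the empty CSV is the cluster disk resource named "Cluster Disk 2" (node and disk names are placeholders):

    # Validate using only the named cluster disk for the storage tests,
    # leaving the CSV that hosts the production VMs out of scope.
    Import-Module FailoverClusters
    Test-Cluster -Node NODE1, NODE2 -Disk "Cluster Disk 2" -ReportName C:\Temp\AddNodeValidation

    # Or skip the storage tests entirely when adding the new node:
    Test-Cluster -Node NODE1, NODE2 -Ignore Storage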


Clustering across Physical Sites

I'll preface this by saying I am new to Server 2012 R2 and failover clustering.

The end goal is to set up SQL 2014 AlwaysOn for a new SharePoint deployment.

Environment
Server1: MainSite
VMWare ESXi 5.1
Windows Server 2012 R2
IBM DS3500 SAN1 - Fibre

Server 2: DRSite
VMWare ESXi 5.1
Windows Server 2012 R2
IBM DS3500 SAN2 - Fibre

Firstly, is it possible to set up two servers that are in two different sites to use Windows failover clustering?

How do you share the disk between two different hosts?

Thanks for getting me started on this.
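For what it's worth, an AlwaysOn Availability Group does not require shared disks between the nodes, so a multi-site cluster for this scenario is often built without any shared storage at all; each node keeps its own SAN LUNs and SQL replicates the databases. A minimal sketch, assuming both servers are in the same domain (names and addresses are placeholders):

    # Two-node cluster with no shared storage; suitable as the base for an Availability Group.
    Import-Module FailoverClusters
    New-Cluster -Name SPSQLCLU -Node Server1, Server2 -NoStorage -StaticAddress 10.0.0.50

    # A file share witness (ideally in a third location) is a common quorum choice across sites.
    Set-ClusterQuorum -NodeAndFileShareMajority "\\WITNESS\ClusterWitness"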

Pre purchase advice on Dell VRTX Cluster config


Hi, I have read quite a few posts on this box, and the more I read the more confused I get.

The required end result is an SQL 2012 Standard AlwaysOn 2-node cluster; a highly available general file and profile share; and a 2012 Hyper-V server.

Am I correct in thinking the following...

2 blades (4 procs, 8 cores, 32 GB RAM): each blade running Server 2012 Standard with Hyper-V enabled, running 2 Server 2012 VMs: VM1 (2 cores) for the file/profile share and VM2 (4 cores) for SQL, all installed onto the blade HDD/SSD. 85 users; business critical for SQL and the file/profile share.

1 blade (2 procs, 12 cores, 32 GB RAM): Server 2012 Enterprise, Hyper-V enabled. This will run assorted Linux and Windows VMs; low-usage, non-business-critical machines.

1 blade (1 proc, 4 cores, 16 GB RAM): Server 2012 Standard, Scale-Out File Server using 4 of the shared drives in one RAID 10 LUN (15k drives), SMB 3.x for SQL. 12 shared drives in a separate RAID 50 LUN (7.2k NL) for the file/profile share VMs and Hyper-V. This leaves HDDs free for global spares and future expansion.

My questions are;

Is this a viable configuration, or am I looking at the box the wrong way by having a single blade controlling the LUNs? Should I be looking at putting Server 2012 Enterprise on 2 blades, upping the memory to 64 GB on them, and running the other 2 blades at a base level as a file cluster? SQL speed is the most important factor for us; we are about 50/50 read/write and use FILESTREAM to store data. This will not change, and writes are likely to increase going forward.

If I go for the 16-port internal switch using all M502P blades, will I gain a benefit from using Intel SFP+ cards for the 2 SQL blades? I have a Netgear GS752TXS stack that supports DA connections.

At the moment our SQL, file share and profiles are all on separate physical boxes. We have moved our desktops to VDI on VMware/Citrix; expanding that with another EqualLogic and a couple of nodes will cost considerably more than a VRTX. I want to remove the remaining single points of failure. I accept the VRTX is still a single box, but it will replace a T510, T610 and PE2950, all out of warranty.

Many thanks for any suggestions or advice.

Kane.

"Automaitc" clustering of installed software


(Newbie) Two node cluster, active\passive.

My manager is under the impression that if we install Failover Clustering (at the OS level), then anything installed on the active\passive nodes, such as SQL Server, will automatically become clustered. I don't think that is correct, but if we did that and the active node failed, what would be "missing" from the passive node after it becomes active?

TIA,

edm2
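One way to see the difference is to list what the cluster actually manages; software installed normally on a node never shows up here unless it was installed cluster-aware (for SQL Server, a failover cluster instance) or wrapped in a clustered role. A quick hedged check:

    # Only resources registered with the cluster fail over with it;
    # a plain, locally installed SQL Server instance will not appear in this list.
    Import-Module FailoverClusters
    Get-ClusterGroup | Get-ClusterResource |
        Format-Table OwnerGroup, Name, ResourceType, State -AutoSize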


Live Migration failed - failed to delete configuration: The request is not supported. (0x80070032). Event ID 21502


We have a 3-node cluster attached to a SAN. All nodes are running Server 2012. We have 2 virtual machines that will no longer live migrate or quick migrate. When we try, we get the following error message.

Event ID: 21502

Live migration of 'Virtual Machine Library' failed.

Virtual machine migration operation for 'SRV-XXX' failed at migration source 'NODE1'. (Virtual machine ID 8CC600A0-5491-45B1-896E-E99BB85AA856)

'SRV-XXX' failed to delete configuration: The request is not supported. (0x80070032). (Virtual machine ID 8CC600A0-5491-45B1-896E-E99BB85AA856)

We are not having this issue with any of our other 15 virtual machines.  I have searched the forums and have not found any articles with the same situation.
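A hedged diagnostic sketch for a VM in this state, assuming the cluster group carries the VM name shown in the event text ("SRV-XXX" is a placeholder):

    # Compare where Hyper-V thinks the VM configuration lives with the
    # resources the cluster holds for that VM.
    Get-VM -Name "SRV-XXX" | Select-Object Name, State, Path
    Get-ClusterGroup -Name "SRV-XXX" | Get-ClusterResource |
        Format-Table Name, ResourceType, State -AutoSize

    # If the VM settings were changed outside Failover Cluster Manager, refreshing
    # the clustered configuration is sometimes suggested before retrying the migration:
    Update-ClusterVirtualMachineConfiguration -Name "SRV-XXX"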

Issue with storage while adding node to cluster


Greetings all,

I started with a 2-node Windows 2012 failover cluster consisting of node1 and node2. To migrate to Windows Server 2012 R2 I built node3 and installed it as a single-node failover cluster. I used Copy Cluster Roles to migrate some VMs and their corresponding CSVs to the new cluster, and things work great. Next I moved all remaining VMs to node1 on the original cluster and evicted node2 from the original cluster. So far so good. I then installed a clean copy of Server 2012 R2 onto node2 and attempted to add it to the new cluster, and that's when I ran into trouble. It seems as though my shared storage on a Dell MD3000 (I know, not supported) is not being seen correctly, as the following is what I get when validating storage:

NODE2.adataconcepts.local
Row Disk Number Disk Signature VPD Page 83h Identifier VPD Serial Number Model Bus Type Stack Type SCSI Address Adapter Eligible for Validation Disk Characteristics
0 0 2b4cd3d6 19EBF3B700000000001517FFFF0AEB84 BOOT Intel Raid 1 Volume RAID Stor Port 1:0:1:0 Intel(R) Desktop/Workstation/Server Express Chipset SATA RAID Controller False Disk is a boot volume. Disk is a system volume. Disk is used for paging files. Disk is used for memory dump files. Disk bus type does not support clustering. Disk is on the system bus. Disk partition style is MBR. Disk type is BASIC. 
1 1 e4f31a08 60019B9000B68362000064BD5387F98E 71K002P  DELL MD3000 Multi-Path Disk Device SAS Stor Port 0:0:15:0 Microsoft Multi-Path Bus Driver True Disk partition style is MBR. Disk type is BASIC. Disk uses Microsoft Multipath I/O (MPIO). 
2 2 5be62253 60019B9000B683620000638651D2EB2D 71K002P  DELL MD3000 Multi-Path Disk Device SAS Stor Port 0:0:15:6 Microsoft Multi-Path Bus Driver True Disk partition style is MBR. Disk type is BASIC. Disk uses Microsoft Multipath I/O (MPIO). 
3 3 5be62245 60019B9000B6933F00000B9351D2E686 71K002P  DELL MD3000 Multi-Path Disk Device SAS Stor Port 0:0:15:7 Microsoft Multi-Path Bus Driver True Disk partition style is MBR. Disk type is BASIC. Disk uses Microsoft Multipath I/O (MPIO). 
4 4 c1100e74 60019B9000B6933F00000BE05363B77F 71K002P  DELL MD3000 Multi-Path Disk Device SAS Stor Port 0:0:15:11 Microsoft Multi-Path Bus Driver True Disk partition style is MBR. Disk type is BASIC. Disk uses Microsoft Multipath I/O (MPIO). 
5 5 6420dd00 60019B9000B68362000064395363C60A 71K002P  DELL MD3000 Multi-Path Disk Device SAS Stor Port 0:0:15:12 Microsoft Multi-Path Bus Driver True Disk partition style is MBR. Disk type is BASIC. Disk uses Microsoft Multipath I/O (MPIO). 
6 6 2cf2c0bc 60019B9000B6933F00000BFF536C698E 71K002P  DELL MD3000 Multi-Path Disk Device SAS Stor Port 0:0:15:13 Microsoft Multi-Path Bus Driver True Disk partition style is MBR. Disk type is BASIC. Disk uses Microsoft Multipath I/O (MPIO). 

NODE3.adataconcepts.local
Row Disk Number Disk Signature VPD Page 83h Identifier VPD Serial Number Model Bus Type Stack Type SCSI Address Adapter Eligible for Validation Disk Characteristics
0 0 eb58d775 759EAC7A01000000001517FFFF0AEB84 Boot Intel Raid 1 Volume RAID Stor Port 1:0:1:0 Intel(R) Desktop/Workstation/Server Express Chipset SATA RAID Controller False Disk is a boot volume. Disk is a system volume. Disk is used for paging files. Disk is used for memory dump files. Disk bus type does not support clustering. Disk is on the system bus. Disk partition style is MBR. Disk type is BASIC. 
1 1 e4f31a08 60019B9000B68362000064BD5387F98E 71K003O  DELL MD3000 Multi-Path Disk Device SAS Stor Port 0:0:2:0 Microsoft Multi-Path Bus Driver True Disk is already clustered. Disk partition style is MBR. Disk type is BASIC. Disk uses Microsoft Multipath I/O (MPIO). 
2 2 5be62253 60019B9000B683620000638651D2EB2D 71K003O  DELL MD3000 Multi-Path Disk Device SAS Stor Port 0:0:2:6 Microsoft Multi-Path Bus Driver True Disk is already clustered. Disk partition style is MBR. Disk type is BASIC. Disk uses Microsoft Multipath I/O (MPIO). 
3 3 5be62245 60019B9000B6933F00000B9351D2E686 71K003O  DELL MD3000 Multi-Path Disk Device SAS Stor Port 0:0:2:7 Microsoft Multi-Path Bus Driver True Disk is already clustered. Disk partition style is MBR. Disk type is BASIC. Disk uses Microsoft Multipath I/O (MPIO). 
4 4 c1100e74 60019B9000B6933F00000BE05363B77F 71K003O  DELL MD3000 Multi-Path Disk Device SAS Stor Port 0:0:2:11 Microsoft Multi-Path Bus Driver True Disk is already clustered. Disk partition style is MBR. Disk type is BASIC. Disk uses Microsoft Multipath I/O (MPIO). 
5 5 6420dd00 60019B9000B68362000064395363C60A 71K003O  DELL MD3000 Multi-Path Disk Device SAS Stor Port 0:0:2:12 Microsoft Multi-Path Bus Driver True Disk is already clustered. Disk partition style is MBR. Disk type is BASIC. Disk uses Microsoft Multipath I/O (MPIO). 
6 6 2cf2c0bc 60019B9000B6933F00000BFF536C698E 71K003O  DELL MD3000 Multi-Path Disk Device SAS Stor Port 0:0:2:13 Microsoft Multi-Path Bus Driver True Disk is already clustered. Disk partition style is MBR. Disk type is BASIC. Disk uses Microsoft Multipath I/O (MPIO). 

List Disks To Be Validated
Description: List disks that will be validated for cluster compatibility.
Start: 5/30/2014 5:07:08 PM.
Physical disk e4f31a08 is visible from only one node and will not be tested. Validation requires that the disk be visible from at least two nodes. The disk is reported as visible at node: NODE2.adataconcepts.local
Physical disk 2cf2c0bc is visible from only one node and will not be tested. Validation requires that the disk be visible from at least two nodes. The disk is reported as visible at node: NODE3.adataconcepts.local
Physical disk 6420dd00 is visible from only one node and will not be tested. Validation requires that the disk be visible from at least two nodes. The disk is reported as visible at node: NODE3.adataconcepts.local
Physical disk c1100e74 is visible from only one node and will not be tested. Validation requires that the disk be visible from at least two nodes. The disk is reported as visible at node: NODE3.adataconcepts.local
Physical disk 5be62245 is visible from only one node and will not be tested. Validation requires that the disk be visible from at least two nodes. The disk is reported as visible at node: NODE3.adataconcepts.local
Physical disk 5be62253 is visible from only one node and will not be tested. Validation requires that the disk be visible from at least two nodes. The disk is reported as visible at node: NODE3.adataconcepts.local
Physical disk e4f31a08 is visible from only one node and will not be tested. Validation requires that the disk be visible from at least two nodes. The disk is reported as visible at node: NODE3.adataconcepts.local
Physical disk 2cf2c0bc is visible from only one node and will not be tested. Validation requires that the disk be visible from at least two nodes. The disk is reported as visible at node: NODE2.adataconcepts.local
Physical disk 6420dd00 is visible from only one node and will not be tested. Validation requires that the disk be visible from at least two nodes. The disk is reported as visible at node: NODE2.adataconcepts.local
Physical disk c1100e74 is visible from only one node and will not be tested. Validation requires that the disk be visible from at least two nodes. The disk is reported as visible at node: NODE2.adataconcepts.local
Physical disk 5be62245 is visible from only one node and will not be tested. Validation requires that the disk be visible from at least two nodes. The disk is reported as visible at node: NODE2.adataconcepts.local
Physical disk 5be62253 is visible from only one node and will not be tested. Validation requires that the disk be visible from at least two nodes. The disk is reported as visible at node: NODE2.adataconcepts.local
No disks were found on which to perform cluster validation tests. To correct this, review the following possible causes:
* The disks are already clustered and currently Online in the cluster. When testing a working cluster, ensure that the disks that you want to test are Offline in the cluster.
* The disks are unsuitable for clustering. Boot volumes, system volumes, disks used for paging or dump files, etc., are examples of disks unsuitable for clustering.
* Review the "List Disks" test. Ensure that the disks you want to test are unmasked, that is, your masking or zoning does not prevent access to the disks. If the disks seem to be unmasked or zoned correctly but could not be tested, try restarting the servers before running the validation tests again.
* The cluster does not use shared storage. A cluster must use a hardware solution based either on shared storage or on replication between nodes. If your solution is based on replication between nodes, you do not need to rerun Storage tests. Instead, work with the provider of your replication solution to ensure that replicated copies of the cluster configuration database can be maintained across the nodes.
* The disks are Online in the cluster and are in maintenance mode.
No disks were found on which to perform cluster validation tests.

Can anyone shed some light on why, even though the disk signatures are the same on both nodes, the two nodes don't seem to acknowledge that they are in fact looking at the same disks? Any help greatly appreciated!

Regards,

Scott
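A hedged way to compare what each node actually enumerates for those LUNs, using the inbox Storage cmdlets over remoting (node names as above):

    # List the SAS LUNs as seen by each node, keyed by UniqueId and SerialNumber,
    # to confirm whether both nodes really resolve to the same physical disks.
    Invoke-Command -ComputerName NODE2, NODE3 -ScriptBlock {
        Get-Disk | Where-Object { $_.BusType -eq 'SAS' } |
            Select-Object Number, SerialNumber, UniqueId, Size
    } | Sort-Object UniqueId |
        Format-Table PSComputerName, Number, SerialNumber, UniqueId, Size -AutoSize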




CSV performance issue with SAS disks

We've been testing Microsoft's well-known solution: Storage Pools, SAS and finally CSV, measuring IOPS for a cluster environment.

We have SAS disks connected through JBODs and SAS HBA cards.

Before creating a CSV, I used SQLIO and got about 150,000 IOPS, which is acceptable (read-only, 4k, random).

After creating the CSV volume, on the coordinator node I am getting 100,000 (50,000 less right away).

On a non-coordinator node I am getting only 25,000 IOPS.

If I change the owner (and make it the coordinator), then I get 100,000 IOPS again.

That is a huge difference. We don't get the IOPS we paid for; we get 25 percent of it. Any thoughts?
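One thing worth checking is whether the CSV is running direct or redirected I/O on the non-coordinator node, since redirected I/O travels over the cluster network and would explain a drop of this size. A hedged check:

    # StateInfo shows Direct vs. FileSystemRedirected/BlockRedirected per node;
    # the *Reason columns explain why a node is redirected, if it is.
    Get-ClusterSharedVolumeState |
        Format-Table Name, Node, StateInfo, FileSystemRedirectedIOReason, BlockRedirectedIOReason -AutoSize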

Upgrade from Server 2012 cluster\hyper-v to Server 2012 R2 cluster\hyper-v

Are there any white papers available yet for upgrading from Server 2012 cluster\hyper-v to Server 2012 R2 cluster\hyper-v?

Rob Nunley


NIC teaming and Hyper-V switch recommendations in a cluster


Hi,

We've recently purchased four HP Gen 8 servers with a total of ten NICs to be used in a Hyper-V 2012 R2 cluster.

These will be connecting to iSCSI storage, so I'll use two of the NICs for the iSCSI storage connection.

I'm then deciding between two options.

 

1. Create one NIC team and one extensible switch, and create vNICs for Management, Live Migration and CSV\Cluster, with QoS to manage all this traffic. Then connect my VMs to the same switch. (A sketch of this layout follows the two options.)

2. Create two NIC teams, four adapters in each. Use one team just for the Management, Live Migration and CSV\Cluster vNICs, with QoS to manage all this traffic. The other team will be dedicated just to my VMs.
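A minimal sketch of the converged layout in option 1, with adapter names, weights and VLAN/address details left as placeholders:

    # One team across the eight non-iSCSI NICs, one extensible switch on top,
    # and weighted vNICs in the parent partition for the host traffic classes.
    New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers NIC1,NIC2,NIC3,NIC4,NIC5,NIC6,NIC7,NIC8 `
        -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
    New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" `
        -MinimumBandwidthMode Weight -AllowManagementOS $false
    Set-VMSwitch -Name "ConvergedSwitch" -DefaultFlowMinimumBandwidthWeight 50   # share left for VM traffic

    Add-VMNetworkAdapter -ManagementOS -Name "Management"    -SwitchName "ConvergedSwitch"
    Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"
    Add-VMNetworkAdapter -ManagementOS -Name "Cluster-CSV"   -SwitchName "ConvergedSwitch"

    Set-VMNetworkAdapter -ManagementOS -Name "Management"    -MinimumBandwidthWeight 10
    Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 30
    Set-VMNetworkAdapter -ManagementOS -Name "Cluster-CSV"   -MinimumBandwidthWeight 10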

Is there any benefit to isolating the VMs on their own switch?

Would having two teams allow more flexibility with the teaming configurations I could use, such as using Switch Independent\Hyper-V Port mode for the VM team? (I do need to read up on the teaming modes a little more)

Thanks,

2012 R2 CSV "Online (No Access)" after node joins cluster


Okay, this has been going on for months after we performed an upgrade on our 2008 R2 clusters. We upgraded our development cluster from 2008 R2 to 2012 (SP1) and had no issues, saw great performance increases, and decided to do our production clusters. At the time, 2012 R2 was becoming prominent and we decided to just hop over 2012, thinking the changes in that version weren't that drastic. We were wrong.

The cluster works perfectly as long as all nodes stay up and online. Live migration works great, roles (including disks) flip between machines based on load just fine, etc. When a node reboots or the cluster service restarts, as the node goes from "Down" to "Joining" and then "Online", the CSV(s) switch from Online to Online (No Access) and the mount point disappears. If you then move the CSV(s) to the node that just joined back into the cluster, the mount point returns and the volume goes back to Online.

Cluster validation checks out with flying colors and Microsoft has been able to provide 0 help whatsoever.  We have two types of FC storage, one that is being retired and one that we are switching all production machines to.  It does this with both storage units, one SUN and one Hitachi.  Since we are moving to Hitachi, we verified that the firmware was up-to-date (it is), our drivers are current (they are) and that the unit is fully functional (everything checks out).  This has not happened before 2012 R2 and we have proven it by reverting to 2012 on our development cluster.  We have started using features that come with 2012 R2 on our other clusters so we would like to figure this problem out to continue using this platform.

Cluster logs show absolutely no diagnostic information that's of any help.  The normal error message is:

Cluster Shared Volume 'Volume3' ('VM Data') is no longer accessible from this cluster node because of error '(1460)'. Please troubleshoot this node's connectivity to the storage device and network connectivity.

Per Microsoft, our Hitachi system with 2012 R2 and MPIO (we have two paths) is certified for use. This is happening on all three of our clusters (two production and one development). They mostly have the same setup, but we're not sure what could be causing this at this point.
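A hedged sketch of the next diagnostic step: capture how each node sees the CSV at the moment it flips to No Access, and pull a narrowly scoped cluster log from the node that just rejoined (error 1460 generally decodes to ERROR_TIMEOUT). Node name is a placeholder:

    # Per-node view of every CSV: Direct vs. redirected, plus the reason codes.
    Get-ClusterSharedVolumeState |
        Format-Table Name, Node, StateInfo, FileSystemRedirectedIOReason, BlockRedirectedIOReason -AutoSize

    # Last 15 minutes of the cluster log from the node that just rejoined.
    Get-ClusterLog -Node NODE-THAT-REJOINED -TimeSpan 15 -UseLocalTime -Destination C:\Temp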

How to delete the default "cluster" folder once the Quorum drive has been re-assigned?


I installed a Windows Server 2008 R2 failover cluster. By default, Failover Cluster Manager assigned the X:\ drive as the quorum drive. Using Failover Cluster Manager, I was able to move\reassign the quorum drive to, let's say, the N: drive. However, this action does not delete the "Cluster" folder that was initially created on the X: drive.

What is the correct procedure to get rid of the "Cluster" folder that was created on the X: drive when the cluster was first built, without any impact to the existing failover cluster?
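Before touching the folder, it is worth confirming where the witness now lives; a quick hedged check with the FailoverClusters module that ships with 2008 R2:

    # QuorumResource should now point at the disk that hosts N:, not the old X: disk.
    Import-Module FailoverClusters
    Get-ClusterQuorum | Format-List Cluster, QuorumType, QuorumResource

    # Cross-check which disk resources the cluster still owns before deleting anything.
    Get-ClusterResource | Format-Table Name, OwnerGroup, ResourceType, State -AutoSize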

The requested object does not exist. (Exception from HRESULT: 0x80010114)


I have an 8-node cluster with Hyper-V, which will be 10 nodes when it's finally done.
Recently I've been trying to add nodes, and though that went fine, after about a week I could not open Failover Cluster Manager anymore.
After some checking I found out that the most recently added node was giving problems.

VMs on the node still run and function properly, but most PowerShell commands result in "The requested object does not exist. (Exception from HRESULT: 0x80010114)".

I can suspend the node with Suspend-ClusterNode, but draining roles was unsuccessful in one case.
In the other case there were no VMs on the node, so suspending went fine.

What I did find out was that when I tried to ping the node from another, properly functioning node, it took a while before the pinging started. It felt like the interface had to come back online on the problem node.
After that, I could add the cluster to Failover Cluster Manager again. However, PowerShell commands still give a 0x80010114 error, or a CIM error when I use Get-NetAdapter.

A reboot resolves the problem, but only for about a week.

I know there is a topic with the same title already, but the wbemtest and rollup update "answer" there is unclear to me: why should I change something with wbemtest, or install updates that, as far as I can tell, have nothing to do with this problem?

Before I did the ping test from a functioning node I pinged my DC and another node from the problem node just fine.
No waiting at all.

The cluster has three networks: Management (host only), Live Migration, and iSCSI (also a VM switch for certain VMs).

I have no idea where to look. Event Viewer doesn't give me anything I can work with, as far as I can find...
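Since 0x80010114 points at the management layer (DCOM/WMI objects going away) rather than at the VMs themselves, one hedged place to start is the cluster WMI provider on the problem node (node name is a placeholder):

    # The cluster WMI namespace requires packet-privacy authentication.
    # If this call hangs or errors only against the problem node, the fault is in
    # WMI/DCOM on that host rather than in the cluster itself.
    Get-WmiObject -ComputerName PROBLEMNODE -Namespace root\MSCluster `
        -Class MSCluster_Cluster -Authentication PacketPrivacy

    # For comparison, the cluster's own view of its nodes:
    Get-ClusterNode | Format-Table Name, State -AutoSize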

Upgrade Windows File Cluster


Hello.

I've just bought a new server for testing purposes, and I want to set up a Windows file cluster - almost done :)

The file cluster is a two-node Windows Server 2012 R2 Standard cluster.

Now I want to know: when the next Windows Server edition (Windows Server 2014 or Windows Server 2015) gets released, would I then be able to add 2 new nodes to the file cluster, remove the 2 old Windows Server 2012 R2 nodes, and let the cluster keep running?

Or do I have to create a new one?

It's also a requirement that all the data remains intact on the .vhdx disk from the iSCSI host.


Datatechnician


ARP Storm After NLB Creation on 2012 R2 Cluster


I have a customer with an issue when creating an NLB cluster in multicast mode between two guests on a Hyper-V 2012 R2 cluster. The cluster is on a C7000 chassis with BL460 G8 servers and VC FlexFabric 10Gb/24-port (4.10 firmware).

MAC spoofing is also enabled on both of the guests. When the static ARP entry is created on the core switch, we see an ARP storm on the core switch, causing poor performance on the VLAN in question.

Can anyone provide any advice on how we can get around this issue?



Node and Disk Majority


Dear all,

Is there a way to configure "Node and Disk Majority" in a Windows Server 2012 R2 failover cluster? If so, how?

Thanks in advance.
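In 2012 R2 the quorum wizard no longer names the models; you simply choose a witness, and with an even node count plus a disk witness you effectively get the old Node and Disk Majority behaviour (with dynamic quorum on top). A hedged one-liner, with the disk resource name as a placeholder:

    # Configure a disk witness, i.e. the classic Node and Disk Majority layout.
    Import-Module FailoverClusters
    Set-ClusterQuorum -NodeAndDiskMajority "Cluster Disk 1"

    # Verify the result.
    Get-ClusterQuorum | Format-List QuorumType, QuorumResource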


2 Node Failover Cluster - ISCSI Disks as 1 volume?


Hi,

Not sure if I'm in the correct forum; if I'm not, I apologize. I need some advice.

I have created a 2-node failover cluster with 2 HP blades. I also currently have 2 NAS servers (HP X1600 24 TB servers running 2008 Storage Server). The ultimate goal would be to combine all of the storage space from the NASes into 1 volume addressable by the failover cluster (as well as disk space from any additional NASes added in the future).

Right now, I can add the iSCSI disk space from the NAS targets as different volumes under Cluster Shared Volumes. Because of the 16 TB limit in the iSCSI target, I essentially have 2 iSCSI disks on each NAS: one of 16 TB and the other of 4 TB (the NAS drives are configured for RAID 5, so there's a 4 TB loss). So I have 4 iSCSI disks in the cluster, each as its own volume.

Any thoughts on making the 4 drives addressable as one volume? 

Regards,

-Eric


Need help with simple NLB cluster creation


I am very new to Server 2008, let alone clustering...

And I am having a problem... I don't even know if my questions are correct/valid.

 

1. So, I have two machines (VMs actually) that run Windows Server 2008 R2 SP1, each with a single network interface card. Say M1 and M2.

2. I want to make a two-node NLB cluster out of these two machines.

3. I have turned on NLB on both machines from Server Manager.

4. On M1, I have opened NLB Manager and started creating a new cluster.

     a. Right-click on Network Load Balancing Clusters --> Create New Cluster.

     b. The first step is to connect a host. Since I want this machine (the one on which I am setting up NLB) to be part of the NLB cluster, I gave its machine name and clicked Connect.

     c. It is able to connect, and it gives me the option to choose the interface to use. I have only one NIC, and therefore the only option is the local area connection.

     d. Next come the host parameters to add, the dedicated IP address. Here I entered the same IP address that this machine already has. IS THIS CORRECT? (Is this where non-clustered traffic comes in?)

     e. Next, for the cluster parameters, I added an unassigned static IP address (given by my admin) as the cluster IP address. IS THIS CORRECT? (Is this where the clustered traffic comes in?)

     f. I have chosen multicast mode for NLB clustering and entered the FQDN of the cluster.

     g. Now, when I hit Next, I am able to set up the cluster BUT I cannot use it. I mean I cannot add another host, because it says "unidentified network".

 

Pinging the dedicated IP address on M1 is working.

Pinging the cluster IP address on M1 is working; however, ping -a <clusterIPAddress> doesn't resolve the name of the cluster.

Pinging the dedicated IP address of M1 from M2 is NOT working.

Pinging the cluster IP address from M2 is NOT working.

 

Basically my question is: for the dedicated IP address I am using the same IP as machine M1, and for the cluster IP address I am using a separate unassigned static IP address.

Is that right?

 

Any help greatly appreciated... I have read a lot of documentation, but somehow I could not understand it. Can anyone simplify this for me?
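For reference, the same build can be scripted, which sometimes makes the two IP roles clearer: the dedicated IP is simply the host's own address, and the cluster IP is the extra unassigned static address that the whole NLB cluster answers on. A hedged sketch with placeholder names and addresses:

    # Run on M1. The host keeps its existing IP as the dedicated IP; only the
    # cluster IP (the spare static address from your admin) is added on top.
    Import-Module NetworkLoadBalancingClusters
    New-NlbCluster -InterfaceName "Local Area Connection" -ClusterName "nlb.contoso.local" `
        -ClusterPrimaryIP 192.168.1.50 -SubnetMask 255.255.255.0 -OperationMode Multicast

    # Join the second machine (it must be reachable from M1 over the same subnet).
    Get-NlbCluster | Add-NlbClusterNode -NewNodeName "M2" -NewNodeInterface "Local Area Connection"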

2012 R2, Storage Spaces with SAS & SSD, Tiering on SSD, 2-Node Hyper-V Cluster, CSV - Extremely slow access to CSV from opposite node


Hi *.*

I'm experiencing a weird problem.

I have a Fujitsu CX420 S1, a sort of 2-blade server with a shared SAS controller, 4 x 900 GB SAS disks and 2 x 200 GB SSDs.

I installed 2012 R2 Standard (the server is certified for 2012 R2) on both blades, enabled the Hyper-V role, configured Storage Spaces, created a quorum volume without tiering, created a cluster, created a tiered volume, added it to CSV, and created a VM on it.

If the CSV is, for example, on node1 and the VM is on the same node, everything works at full speed (200 MB/sec write & 300 MB/sec read).

If I move the CSV to the opposite node, the speed drops to near zero (600 bytes/sec write & 20 MB/sec read).

It looks like the CSV is always working in redirected mode and using the heartbeat network for passing traffic, but not even at 1 Gbit/sec.

Please help!

I'm available for further info; I'm just running out of time to solve the problem (I have to deliver this cluster) before I fall back to the old method of one volume for every VM (no CSV).

Thanks,

Alessio
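A hedged pair of checks that would confirm (or rule out) the redirected-mode theory and show which cluster network the redirected traffic would ride on:

    # 1. Is the CSV in FileSystemRedirected/BlockRedirected mode when owned by the other node?
    Get-ClusterSharedVolumeState |
        Format-Table Name, Node, StateInfo, BlockRedirectedIOReason -AutoSize

    # 2. Which cluster network would carry the redirected/CSV traffic (lowest metric is preferred)?
    Get-ClusterNetwork | Sort-Object Metric |
        Format-Table Name, Role, Metric, Address -AutoSize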

Question About 2 Node Cluster Setup Windows Server 2012 R2


We have 2 servers with identical configurations. Each server has 64 GB RAM and runs Windows Server 2012 R2 Datacenter Edition. Each server has the Hyper-V role and several VMs. We have created DC1 on Server1 and DC2 on Server2. We have Exchange 2013 VMs (EdgeTransport1, MX1) and (EdgeTransport2, MX2) on the corresponding servers. We also have a SQL Server VM on one of the servers.

We want to configure these two physical servers as nodes of a new cluster. To my knowledge we don't need Active Directory to configure these two servers with failover clustering; however, the resources I have read say we won't be able to validate the cluster setup.

I want to extend the hardware and infrastructure setup so that we can have a highly available system.

Can I specify the domain that is hosted by the VMs named DC1 and DC2 for the cluster setup?

Because the cluster nodes will be powered on before the VMs, would there be any issue?

If this is an unsupported configuration, do I really need to buy an additional server and configure it as a domain controller for the environment?

Also, we have a partnership agreement with Microsoft, so we would like to implement System Center products as well. What would be an ideal configuration/topology to achieve our goals for backup/monitoring and centralized management?

Thanks,

Ismet
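On the Active Directory point: for a supported 2012 R2 cluster the nodes themselves still have to be members of a domain, but 2012 R2 does add an Active Directory-detached option in which the cluster name is registered in DNS only. A hedged sketch of both variants (names and the address are placeholders):

    # Conventional cluster: a computer object for the cluster name is created in AD.
    New-Cluster -Name HVCLUSTER -Node SERVER1, SERVER2 -StaticAddress 192.168.1.60 -NoStorage

    # 2012 R2 "AD-detached" variant: nodes are still domain-joined,
    # but the cluster name lives only in DNS (no computer object is created).
    New-Cluster -Name HVCLUSTER -Node SERVER1, SERVER2 -StaticAddress 192.168.1.60 `
        -NoStorage -AdministrativeAccessPoint Dns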


Windows Clustering Networks question...


Hi all;

This is my scenario:

I have installed Windows Server 2012 on two servers and then enabled the Failover Clustering feature on them. The shared storage is based on Fibre Channel technology. Each server has 4 NICs, and I have split them up as follows:

  • One NIC for remote management of the servers, in the 172.16.105.0/24 range.
  • One NIC dedicated to heartbeat communication.
  • Two NICs bundled together with the NIC Teaming feature of the operating system.

But as you can see in the following figure, there are 4 cluster network links:

Is this normal?

Thanks
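Each distinct IP subnet the nodes expose becomes its own cluster network (a team counts once, as a single adapter), so the number you see depends on how many subnets are configured, including any unconfigured/APIPA adapters. A hedged way to see how the cluster classified them, and to adjust a role if needed:

    # Role 3 = cluster and client, 1 = cluster-only (heartbeat/CSV), 0 = none (e.g. storage-only).
    Get-ClusterNetwork | Format-Table Name, Role, Address, AddressMask -AutoSize

    # Example: keep the heartbeat network for internal cluster traffic only.
    (Get-ClusterNetwork -Name "Cluster Network 2").Role = 1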




