
Tag Archive: DAG


"Hey Chad, how come I see two IP addresses in Failover Cluster Manager (FCM), and why is only one of them "Online" while the other is "Offline"? Is there an issue with my DAG?"

Well, let's get some context here. This large customer has a stretched DAG that spans two geographic locations and two AD sites. This DAG (per Microsoft best practices) has two internal private IP addresses, one for the MAPI network at each location. For some additional reading, follow the linked rabbit hole below!

Understanding Database Availability Groups
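As a refresher, those per-site DAG IP addresses are assigned from the Exchange Management Shell rather than from FCM. A minimal sketch, using a made-up DAG name and addresses, looks something like this:

```powershell
# Assign one DAG IP address for the MAPI subnet in each datacenter
# (DAG01 and the addresses below are placeholders - substitute your own).
Set-DatabaseAvailabilityGroup -Identity DAG01 `
    -DatabaseAvailabilityGroupIpAddresses 10.84.189.50,10.85.189.50

# Confirm what was set
Get-DatabaseAvailabilityGroup DAG01 | FL Name,*IpAddress*
```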

So this stretched DAG is up and running successfully, and replication is firing away from their primary datacenter to their DR datacenter with no issues. Everyone is happy, and copy and replay queue lengths are low. Then along comes their server monitoring team, running around with their arms in the air, screaming that the Exchange 2010 sky is falling!

“There is an issue with your Exchange cluster, what would you like us to do to it for you?”

The correct answer here (and much credit to my customer for giving it) was "Nothing". Although Exchange DAGs utilize the Failover Clustering features of Windows Server 2008, the integration is not as deep as it was in the Exchange 2007 CCR days. Even though Exchange leverages part of Failover Clustering, its primary management method should always be the Exchange Management Console (EMC) or Exchange Management Shell (EMS). My token line about this is: "If you're in FCM, you've got some serious issues. Exchange DAG clusters should always be managed from the EMC or EMS unless you're doing a datacenter switchover and/or being assisted by Microsoft Premier Support Services."

So what are we looking at here?

[Screenshot: the DAG's two IP address resources in Failover Cluster Manager, one Online and one Offline]

So the server team sees a resource "Offline" and panics. The image you see above is normal and expected. Now, the cluster "Owner" in any DAG case is the PAM, or Primary Active Manager. Which of the two IPs shown above is online depends on which node of the stretched cluster currently holds the PAM role. In this example, one of the nodes on the 10.84.189.X network is the PAM. How can we verify this? Easy sauce..

Get-DatabaseAvailabilityGroup A0000-DAG0102-V -Status | FL Name, *Prim*

If the node listed in the output fails, a new PAM will automatically be elected. If the new PAM is on the same side of the stretched DAG, the online DAG IP listed above doesn't change. If the selection/promotion process chooses a server on the far side, the online/offline IP listing above flip-flops. There can only be one online IP address for the DAG at any time.
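To make that concrete, the -Status switch makes the cmdlet query the live cluster, and the PrimaryActiveManager property names the node that currently owns the cluster group. The output looks roughly like this (the server name shown is just a placeholder):

```powershell
Get-DatabaseAvailabilityGroup A0000-DAG0102-V -Status | FL Name, *Prim*

# Illustrative output:
# Name                 : A0000-DAG0102-V
# PrimaryActiveManager : A0000-MBX01-V
```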

“Should I move my PAM to one datacenter over the other?”

Good question. Do you run an active/passive (primary/DR) kind of scenario? Do you have poor network connectivity to the other side of the stretched DAG? Then maybe. The best-case scenario with any cluster management in Exchange is to let the mechanisms manage themselves until it's absolutely necessary to intervene.

Good afternoon all, hoping you are having a good weekend! I wish I could say I got a bunch of sleep last night, but I can't. One of my customers reached out to me in frustration at 1 AM. They were in the middle of doing some initial disaster recovery testing for their new Exchange 2010 environment. For some background, this is a very large company with a mix of Exchange 2003, 2007, and 2010. They just added 24 Exchange 2010 servers into the mix and have a small pilot set of users on the 2010 side (around 250 of their 100,000+ users). The servers are split between two primary datacenters, half of the 2010 servers in each. There are two Database Availability Groups, each with 12 servers, 6 per datacenter.

Now, in this test they were using Datacenter Activation Coordination (DAC) mode for both DAGs. In their test window they were able to successfully fail both DAGs over from the primary datacenter. They were able to test all parts from the secondary site (mail flow, OWA access, ActiveSync, etc.). The trouble started when the phase of the test that restores messaging services back to the primary datacenter came around. One of the two DAGs worked fine when they rejoined the nodes from the primary site; the other DAG had issues when failing back. They were properly trying to execute the Start-DatabaseAvailabilityGroup cmdlet with the -ActiveDirectorySite parameter pointing to their primary AD site. This was failing, stating that one or more of the nodes was already in the cluster! They confirmed this using the Get-DatabaseAvailabilityGroup cmdlet and looking at the "StartedMailboxServers" and "StoppedMailboxServers" attributes of the DAG. Now remember, these attributes are NOT indicative of the actual state of the server. The servers could be completely turned OFF but still appear in the "StartedMailboxServers" list. These attributes are what is used to figure out quorum, as well as to make database mounting decisions, when DAC mode is enabled. So one of the servers that should have been evicted, and should have appeared in the "StoppedMailboxServers" list, was still in the started servers list.
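For context, the DAC-mode datacenter switchover and switchback dance is driven by a handful of cmdlets. Here is a rough sketch of the path they were on (the DAG and site names below are placeholders, and the full TechNet procedure has additional steps, such as Restore-DatabaseAvailabilityGroup during the switchover itself):

```powershell
# During the switchover, the primary-site members are marked as stopped
# (-ConfigurationOnly is used when those servers are unreachable):
Stop-DatabaseAvailabilityGroup -Identity DAG01 -ActiveDirectorySite "Primary-Site" -ConfigurationOnly

# Once the primary datacenter is healthy again, rejoin its members:
Start-DatabaseAvailabilityGroup -Identity DAG01 -ActiveDirectorySite "Primary-Site"

# Check which servers Exchange considers started vs. stopped:
Get-DatabaseAvailabilityGroup DAG01 -Status | FL Name,StartedMailboxServers,StoppedMailboxServers
```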

This got my client wondering why. They also noted that the Cluster service was disabled and stopped on the evicted nodes. They then errantly set the service to Automatic and tried to start it. This should not be done, because if the Start-DatabaseAvailabilityGroup cmdlet works successfully it will do this for you. Let the PowerShell cmdlets do their job. When they set the service to Automatic, the evicted nodes randomly started showing up in the "StartedMailboxServers" list, even though the service wasn't even running, which merely added to the confusion. So we set the affected 6 nodes back to Disabled, and the started and stopped server lists were correct once more.

To figure out why the cmdlet was failing, we looked at the Failover Cluster Manager administrative console on each node to verify whether the nodes that should still be listed in the cluster, in fact, still were. We found that one node was still listing itself as a down member of the cluster. All other nodes showed just the proper nodes from the secondary datacenter. Exchange needs its view of the cluster, who's in and who's out, to match what the cluster itself reports. Since the two views weren't coalescing, the Start-DatabaseAvailabilityGroup cmdlet was failing.

Now, being a member of the Premier Field Engineering group, I have access to internal knowledge bases and cases. What I did next likely shouldn't be done without guidance from Microsoft Premier Support Services or PFEs. Normally, when managing DAGs and their membership, the EMS and EMC should always be used; making changes in the FCM console is not recommended in most cases. Since that one server, and only that one server, was incorrectly reporting cluster membership, we used FCM to manually evict the node so that Exchange's view of membership matched the cluster's. Once we did this and re-ran the Start-DatabaseAvailabilityGroup cmdlet, it re-added the previously evicted nodes (including the troublesome one) back into the DAG. Not only did the cmdlet complete successfully, the FCM console now showed all 12 servers as members with an Up status.
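If you want to compare the two views yourself, a quick sketch looks like this (the DAG name is a placeholder; Get-ClusterNode requires the FailoverClusters PowerShell module that shipped with Windows Server 2008 R2, so on plain Server 2008 use cluster.exe instead):

```powershell
# Exchange's view of DAG membership:
Get-DatabaseAvailabilityGroup DAG01 -Status |
    FL Name,StartedMailboxServers,StoppedMailboxServers

# The cluster's view of membership and node state:
Import-Module FailoverClusters
Get-ClusterNode              # on Windows Server 2008: cluster.exe node /status
```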

Finally, to ensure that the DAG was fully functional, they queried all database copies and reported on each copy's replication status and database state. All showed Mounted or Healthy! At this point they were ready to run the Move-ActiveMailboxDatabase cmdlet to shift the active database copies back to their primary datacenter. They also could have used the RedistributeActiveDatabases.ps1 script included with Exchange 2010, outlined at the end of this TechNet article on Managing Mailbox Database Copies.
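If you're following along at home, the verification and the move back look roughly like this (the database, server, and DAG names are placeholders):

```powershell
# Report replication health and state for every copy on a given server:
Get-MailboxDatabaseCopyStatus -Server MBX01 |
    FT Name,Status,CopyQueueLength,ReplayQueueLength

# Move a single active copy back to a primary-datacenter server:
Move-ActiveMailboxDatabase DB01 -ActivateOnServer MBX01

# Or rebalance every active copy by activation preference with the built-in script:
cd $exscripts
.\RedistributeActiveDatabases.ps1 -DagName DAG01 -BalanceDbsByActivationPreference -Confirm:$false
```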

Our friends over at Exchange Server Pro (http://exchangeserverpro.com/) had an AWESOME write-up on using Exchange's new Database Availability Groups on top of VMware virtualization. Worth a read for sure! Heck, just look at this part alone:

Microsoft clearly states that hypervisor HA features should be disabled for DAG members, while VMware considers it to be an effective solution.

Microsoft vs VMware on Exchange Virtualization and HA Best Practices

Also, if you're a Twitter head like myself, make sure you follow them!

@ExchServPro

Exchange 2010 brings us the fully awesome capability of the DAG, or Database Availability Group. This new HA (high availability) component completely guts out and replaces all the 2007 options (LCR, SCR, CCR, SCC). As cool and as easy to set up as DAGs are, how many DB copies to make and where to place them becomes the key question.

There are a lot of things to consider here. How many servers do I have? Do I really need HA for all my DBs? Think of the storage impact here!

Things that may affect this? Number of servers, storage space, cost, licensing, time, knowledge, link speeds, etc.

Our friends over at the Microsoft Exchange Team blog have given us some great scenarios and examples in this blog post on Designing a Highly Available Database Copy Layout. To save you the jump, I've copied it in here. They get all the credit, though, as they are an amazing bunch of tech gurus with the ability to easily communicate even the most complex designs and concepts.

 

Exchange 2010 introduced the database availability group (DAG), which enables you to design a mailbox resiliency configuration that is essentially a redundant array of independent Mailbox servers. Multiple copies of each mailbox database are distributed across these servers to enable mailboxes to remain available during one or more server or database outages.

As part of your design process, you need to design a balanced database copy layout, which may in turn require you to revisit several design decisions to derive the optimal design. The following design principles should be used when planning the database copy layout:

Design Principle 1: Ensure that you minimize multiple database copy failures of a given mailbox database by isolating each copy from one another and placing them in different failure domains. A failure domain is a component or set of components that comprises a portion of the overall solution architecture (e.g., a server rack, a storage array, a router, etc.). For example, you would not want to place more than one copy of a given mailbox database within the same server rack, or host them on the same storage array. If you lose the rack or the array, you end up losing multiple copies of the same database (perhaps your only copies!).

Design Principle 2: Distribute the database copies across the DAG members in a consistent and efficient fashion to ensure that the active mailbox databases are evenly distributed after a failure. The sum of the Activation Preference values of each database copy on each DAG member should be equal or close to equal, as this configuration will result in an approximately equal distribution of active copies throughout the DAG after a failure (assuming replication is healthy and up-to-date).
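As a quick sanity check of this principle from the shell, you can total the ActivationPreference values per server. This is only a rough, illustrative sketch; it assumes the copies already exist and that grouping on the Key property of each ActivationPreference entry resolves to the hosting server name:

```powershell
# Sum the activation preference of every database copy hosted on each DAG member;
# roughly equal totals per server indicate a balanced copy layout.
Get-MailboxDatabase | ForEach-Object { $_.ActivationPreference } |
    Group-Object { $_.Key.ToString() } |
    Select-Object Name,
        @{Name="PreferenceSum"; Expression={ ($_.Group | Measure-Object Value -Sum).Sum }}
```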

In order to follow these design principles, we recommend you place the database copies in a particular arrangement to ensure that the active copies are symmetrically distributed across as many servers as possible. This arrangement of database copies is based on a “building block” concept.

1. The first building block (known as the Level 1 Building Block) is based on the number of Mailbox servers that will host active database copies. Assume this number is N. N defines not only the number of Mailbox servers, but also the number of databases within the building block. One active database copy is placed on each server, forming a diagonal pattern, as represented in the diagram below.

For example, let's say we have 4 servers, each with its own dedicated storage and deployed in a separate server rack, and we want to deploy 24 databases with 3 copies of each database. In this case, the size of our first Level 1 Building Block is 4, and the copy layout looks like this:

 

 

| | | Server 1 | Server 2 | Server 3 | Server 4 |
|---|---|---|---|---|---|
| Level 1 Building Block Set 1 | DB1 | Copy 1 | | | |
| | DB2 | | Copy 1 | | |
| | DB3 | | | Copy 1 | |
| | DB4 | | | | Copy 1 |

The same pattern is then repeated for each remaining Level 1 Building Block set (given 24 databases, there are six Level 1 Building Block sets in this example).

 

| | Server 1 | Server 2 | Server 3 | Server 4 |
|---|---|---|---|---|
| DB1 | Copy 1 | | | |
| DB2 | | Copy 1 | | |
| DB3 | | | Copy 1 | |
| DB4 | | | | Copy 1 |
| DB5 | Copy 1 | | | |
| DB6 | | Copy 1 | | |
| DB7 | | | Copy 1 | |
| DB8 | | | | Copy 1 |
| DB9 | Copy 1 | | | |
| DB10 | | Copy 1 | | |
| DB11 | | | Copy 1 | |
| DB12 | | | | Copy 1 |
| DB13 | Copy 1 | | | |
| DB14 | | Copy 1 | | |
| DB15 | | | Copy 1 | |
| DB16 | | | | Copy 1 |
| DB17 | Copy 1 | | | |
| DB18 | | Copy 1 | | |
| DB19 | | | Copy 1 | |
| DB20 | | | | Copy 1 |
| DB21 | Copy 1 | | | |
| DB22 | | Copy 1 | | |
| DB23 | | | Copy 1 | |
| DB24 | | | | Copy 1 |

2. As you add second database copies, you place them differently for each building block set. Since one server is already hosting the active copy, there are N-1 servers available to host the second database copy. As you use each of these N-1 servers once, you have a complete symmetric distribution which will form the new larger building block. Therefore the new building block (known as the Level 2 Building Block) size becomes N*(N-1) databases. This means that the second database copy for the first database is placed on the second server, and each second copy thereafter is deployed in a diagonal pattern within the building block. After the pattern is completed within the first Level 1 Building Block set, the starting position of the second copy for the next block is offset by one so that the second copy starts on the third server.

In our example, the building block size now becomes 4*(4-1) = 4*3 = 12, which means that 12 databases make up each Level 2 Building Block set. Note that for Level 1 Building Block set 1 (DB1-DB4), the second copy for DB1 is placed on Server 2, while for Level 1 Building Block set 2 (DB5-DB8), the second copy for DB5 is placed on Server 3. The starting server for second-copy placement in each Level 1 Building Block set is offset from the previous set by one server, so the layout continues by placing the second copy for DB9 on Server 4. This ensures that a Server 1 failure will activate second copies across all three remaining servers rather than activating multiple databases on the same server, which provides balanced activation.

 

 

Level 2 Building Block (4×3=12) Set 1:

| | | Server 1 | Server 2 | Server 3 | Server 4 |
|---|---|---|---|---|---|
| Level 1 Building Block Set 1 | DB1 | Copy 1 | Copy 2 | | |
| | DB2 | | Copy 1 | | |
| | DB3 | | | Copy 1 | |
| | DB4 | | | | Copy 1 |
| Level 1 Building Block Set 2 | DB5 | Copy 1 | | Copy 2 | |
| | DB6 | | Copy 1 | | |
| | DB7 | | | Copy 1 | |
| | DB8 | | | | Copy 1 |
| Level 1 Building Block Set 3 | DB9 | Copy 1 | | | Copy 2 |
| | DB10 | | Copy 1 | | |
| | DB11 | | | Copy 1 | |
| | DB12 | | | | Copy 1 |

This pattern is then repeated for each remaining Level 2 Building Block set (given 24 databases, there are two Level 2 Building Block sets in this example). Note that the second copy for DB13 is placed on Server 2.

 

 

Level 2 Building Block (4×3=12) Set 2:

| | | Server 1 | Server 2 | Server 3 | Server 4 |
|---|---|---|---|---|---|
| Level 1 Building Block Set 4 | DB13 | Copy 1 | Copy 2 | | |
| | DB14 | | Copy 1 | | |
| | DB15 | | | Copy 1 | |
| | DB16 | | | | Copy 1 |
| Level 1 Building Block Set 5 | DB17 | Copy 1 | | Copy 2 | |
| | DB18 | | Copy 1 | | |
| | DB19 | | | Copy 1 | |
| | DB20 | | | | Copy 1 |
| Level 1 Building Block Set 6 | DB21 | Copy 1 | | | Copy 2 |
| | DB22 | | Copy 1 | | |
| | DB23 | | | Copy 1 | |
| | DB24 | | | | Copy 1 |

To understand this logic better, compare database copy placement for databases 1, 5, and 9. All of these databases have the active copy hosted on server 1, so if this server fails, you want the second database copies activated on different remaining servers to achieve equal load distribution. This is what you achieve by placing the second database copy of DB1 on server 2, the second copy of DB5 on server 3, and the second copy of DB9 on server 4. Starting with DB13, you simply repeat the pattern.

The rest of the second database copies are added in a diagonal pattern:

 

| | Server 1 | Server 2 | Server 3 | Server 4 |
|---|---|---|---|---|
| DB1 | Copy 1 | Copy 2 | | |
| DB2 | | Copy 1 | Copy 2 | |
| DB3 | | | Copy 1 | Copy 2 |
| DB4 | Copy 2 | | | Copy 1 |
| DB5 | Copy 1 | | Copy 2 | |
| DB6 | | Copy 1 | | Copy 2 |
| DB7 | Copy 2 | | Copy 1 | |
| DB8 | | Copy 2 | | Copy 1 |
| DB9 | Copy 1 | | | Copy 2 |
| DB10 | Copy 2 | Copy 1 | | |
| DB11 | | Copy 2 | Copy 1 | |
| DB12 | | | Copy 2 | Copy 1 |
| DB13 | Copy 1 | Copy 2 | | |
| DB14 | | Copy 1 | Copy 2 | |
| DB15 | | | Copy 1 | Copy 2 |
| DB16 | Copy 2 | | | Copy 1 |
| DB17 | Copy 1 | | Copy 2 | |
| DB18 | | Copy 1 | | Copy 2 |
| DB19 | Copy 2 | | Copy 1 | |
| DB20 | | Copy 2 | | Copy 1 |
| DB21 | Copy 1 | | | Copy 2 |
| DB22 | Copy 2 | Copy 1 | | |
| DB23 | | Copy 2 | Copy 1 | |
| DB24 | | | Copy 2 | Copy 1 |

3. As you add a third database copy, again you need to place it differently for each group of now N*(N-1) databases. Since now you have only N-2 servers available to choose from for the third database copy placement, this generates N-2 variations, such that the new building block (known as the Level 3 Building Block) becomes N*(N-1)*(N-2) databases. Therefore, the third database copy for the first database is placed on the third server, and each third copy thereafter is deployed in a diagonal pattern according to that starting position within this new building block. After the pattern is completed within the first Level 1 Building Block set, the starting position is offset by one so that the third copy is placed in the fourth position.

In this example, our building block now becomes 4*(4-1)*(4-2) = 4*3*2 = 24, which means that 24 databases make up each Level 3 Building Block set. To produce the symmetric database placement pattern, place the third database copy of DB1 on Server 3 (this is the first available server because Server 1 hosts the first copy and Server 2 hosts the second copy), and offset each next copy by 1 until you reach the end of the Level 1 Building Block set 1. For the next building block set, again place the third database copy on the next available server (Server 4), and continue in the same manner until you reach DB12 which marks the end of the Level 2 Building Block set 1. For databases 13-20, follow the same pattern but offset third database copy placement by 1 so that it doesn’t end up on the same servers as for databases 1-12.

 

 

Level 3 Building Block (4×3×2=24):

Level 2 Building Block (4×3=12) Set 1:

| | | Server 1 | Server 2 | Server 3 | Server 4 |
|---|---|---|---|---|---|
| Level 1 Building Block Set 1 | DB1 | Copy 1 | Copy 2 | Copy 3 | |
| | DB2 | | Copy 1 | Copy 2 | Copy 3 |
| | DB3 | Copy 3 | | Copy 1 | Copy 2 |
| | DB4 | Copy 2 | Copy 3 | | Copy 1 |
| Level 1 Building Block Set 2 | DB5 | Copy 1 | | Copy 2 | Copy 3 |
| | DB6 | Copy 3 | Copy 1 | | Copy 2 |
| | DB7 | Copy 2 | Copy 3 | Copy 1 | |
| | DB8 | | Copy 2 | Copy 3 | Copy 1 |
| Level 1 Building Block Set 3 | DB9 | Copy 1 | Copy 3 | | Copy 2 |
| | DB10 | Copy 2 | Copy 1 | Copy 3 | |
| | DB11 | | Copy 2 | Copy 1 | Copy 3 |
| | DB12 | Copy 3 | | Copy 2 | Copy 1 |

Level 2 Building Block (4×3=12) Set 2:

| | | Server 1 | Server 2 | Server 3 | Server 4 |
|---|---|---|---|---|---|
| Level 1 Building Block Set 4 | DB13 | Copy 1 | Copy 2 | | Copy 3 |
| | DB14 | Copy 3 | Copy 1 | Copy 2 | |
| | DB15 | | Copy 3 | Copy 1 | Copy 2 |
| | DB16 | Copy 2 | | Copy 3 | Copy 1 |
| Level 1 Building Block Set 5 | DB17 | Copy 1 | Copy 3 | Copy 2 | |
| | DB18 | | Copy 1 | Copy 3 | Copy 2 |
| | DB19 | Copy 2 | | Copy 1 | Copy 3 |
| | DB20 | Copy 3 | Copy 2 | | Copy 1 |
| Level 1 Building Block Set 6 | DB21 | Copy 1 | | Copy 3 | Copy 2 |
| | DB22 | Copy 2 | Copy 1 | | Copy 3 |
| | DB23 | Copy 3 | Copy 2 | Copy 1 | |
| | DB24 | | Copy 3 | Copy 2 | Copy 1 |

Again, to understand this logic better, compare database copy placement for databases 1 and 13. These databases have the active database copy hosted on server 1, and the second database copy hosted on server 2. If both servers fail, you want to have the third database copies activated on different remaining servers to achieve equal load distribution. This is what you achieve by placing the third database copy of DB1 on server 3, and the third database copy of DB13 on server 4. Similar “pairs” are formed by databases 2 and 14, 3 and 15, and so on. Starting with DB25, you would simply repeat the pattern, but this example does not have that many databases.

 

| | Server 1 | Server 2 | Server 3 | Server 4 |
|---|---|---|---|---|
| DB1 | Copy 1 | Copy 2 | Copy 3 | |
| DB2 | | Copy 1 | Copy 2 | |
| DB3 | | | Copy 1 | Copy 2 |
| DB4 | Copy 2 | | | Copy 1 |

| | Server 1 | Server 2 | Server 3 | Server 4 |
|---|---|---|---|---|
| DB13 | Copy 1 | Copy 2 | | Copy 3 |
| DB14 | | Copy 1 | Copy 2 | |
| DB15 | | | Copy 1 | Copy 2 |
| DB16 | Copy 2 | | | Copy 1 |

4. As you add a fourth database copy, again you need to place it differently for each group of now N*(N-1)*(N-2) databases, such that the new building block becomes N*(N-1)*(N-2)*(N-3) databases. This follows the same logical approach and ensures that the database distribution will be even within the new building block in case of 3 server failures.

The example of 4 servers leaves only 1 variation for placing the 4th database copy (as there is only one remaining server available), so the building block size actually remains 24. This is also seen from the formula for building block size, as 4*3*2*(4-3) = 4*3*2*1 = 24.

5. As you continue adding more database copies, the building block keeps growing such that the general formula for the building block size is Perm(N, M) = N(N-1)…(N-M+1) = N!/(N-M)! = C(N, M) × M! (where N = number of servers and M = number of database copies). This becomes obvious as you realize that complete symmetric distribution of the database copies is achieved by selecting all possible permutations of M database copies across N available servers.
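To make the arithmetic concrete, here is a tiny illustrative function (the function name is made up for this example) that computes the building block size for a given number of servers and copies:

```powershell
# Building block size = permutations of M copies across N servers:
# N * (N-1) * ... * (N-M+1)
function Get-BuildingBlockSize {
    param([int]$Servers, [int]$Copies)
    $size = 1
    for ($i = 0; $i -lt $Copies; $i++) { $size *= ($Servers - $i) }
    return $size
}

Get-BuildingBlockSize -Servers 4 -Copies 2   # 12
Get-BuildingBlockSize -Servers 4 -Copies 3   # 24
Get-BuildingBlockSize -Servers 4 -Copies 4   # 24
```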

In the event of a single server failure (server 4, for example), the active mailbox databases will be distributed as follows (the second copy is activated for databases 4, 8, 12, 16, and 20), which results in no more than 8 activated mailbox databases per server (assuming replication is healthy and up-to-date).

 

| | Server 1 | Server 2 | Server 3 | Server 4 (failed) |
|---|---|---|---|---|
| DB1 | Copy 1 | Copy 2 | Copy 3 | |
| DB2 | | Copy 1 | Copy 2 | Copy 3 |
| DB3 | Copy 3 | | Copy 1 | Copy 2 |
| DB4 | Copy 2 | Copy 3 | | Copy 1 |
| DB5 | Copy 1 | | Copy 2 | Copy 3 |
| DB6 | Copy 3 | Copy 1 | | Copy 2 |
| DB7 | Copy 2 | Copy 3 | Copy 1 | |
| DB8 | | Copy 2 | Copy 3 | Copy 1 |
| DB9 | Copy 1 | Copy 3 | | Copy 2 |
| DB10 | Copy 2 | Copy 1 | Copy 3 | |
| DB11 | | Copy 2 | Copy 1 | Copy 3 |
| DB12 | Copy 3 | | Copy 2 | Copy 1 |
| DB13 | Copy 1 | Copy 2 | | Copy 3 |
| DB14 | Copy 3 | Copy 1 | Copy 2 | |
| DB15 | | Copy 3 | Copy 1 | Copy 2 |
| DB16 | Copy 2 | | Copy 3 | Copy 1 |
| DB17 | Copy 1 | Copy 3 | Copy 2 | |
| DB18 | | Copy 1 | Copy 3 | Copy 2 |
| DB19 | Copy 2 | | Copy 1 | Copy 3 |
| DB20 | Copy 3 | Copy 2 | | Copy 1 |
| DB21 | Copy 1 | | Copy 3 | Copy 2 |
| DB22 | Copy 2 | Copy 1 | | Copy 3 |
| DB23 | Copy 3 | Copy 2 | Copy 1 | |
| DB24 | | Copy 3 | Copy 2 | Copy 1 |
| Active DB Count | 8 | 8 | 8 | |

 

In the event of a double server failure (the third copy is activated for several databases), the remaining two servers, Server 2 and Server 3, will have an equal number of activated mailbox databases (assuming replication is healthy and up-to-date).

 

| | Server 1 (failed) | Server 2 | Server 3 | Server 4 (failed) |
|---|---|---|---|---|
| DB1 | Copy 1 | Copy 2 | Copy 3 | |
| DB2 | | Copy 1 | Copy 2 | Copy 3 |
| DB3 | Copy 3 | | Copy 1 | Copy 2 |
| DB4 | Copy 2 | Copy 3 | | Copy 1 |
| DB5 | Copy 1 | | Copy 2 | Copy 3 |
| DB6 | Copy 3 | Copy 1 | | Copy 2 |
| DB7 | Copy 2 | Copy 3 | Copy 1 | |
| DB8 | | Copy 2 | Copy 3 | Copy 1 |
| DB9 | Copy 1 | Copy 3 | | Copy 2 |
| DB10 | Copy 2 | Copy 1 | Copy 3 | |
| DB11 | | Copy 2 | Copy 1 | Copy 3 |
| DB12 | Copy 3 | | Copy 2 | Copy 1 |
| DB13 | Copy 1 | Copy 2 | | Copy 3 |
| DB14 | Copy 3 | Copy 1 | Copy 2 | |
| DB15 | | Copy 3 | Copy 1 | Copy 2 |
| DB16 | Copy 2 | | Copy 3 | Copy 1 |
| DB17 | Copy 1 | Copy 3 | Copy 2 | |
| DB18 | | Copy 1 | Copy 3 | Copy 2 |
| DB19 | Copy 2 | | Copy 1 | Copy 3 |
| DB20 | Copy 3 | Copy 2 | | Copy 1 |
| DB21 | Copy 1 | | Copy 3 | Copy 2 |
| DB22 | Copy 2 | Copy 1 | | Copy 3 |
| DB23 | Copy 3 | Copy 2 | Copy 1 | |
| DB24 | | Copy 3 | Copy 2 | Copy 1 |
| Active DB Count | | 12 | 12 | |

 

Conclusion

Hopefully this guidance helps you with planning your database copy layout.  If you have any questions, please let us know.