Channel: You Had Me At EHLO…

Troubleshooting High CPU utilization issues in Exchange 2013


Introduction

In Exchange support we see a wide range of support issues. Few of them can be more difficult to troubleshoot than performance issues. Part of the reason for that is the ambiguity of the term "performance issue": it can manifest as anything from random client disconnects to database failovers or slow mobile device syncing. One of the most common performance issues we see is one where the CPU is running higher than expected. "High CPU" can be a bit of an ambiguous term as well. What exactly is high? How long does it occur? When does it occur? All of these are questions that have to be answered before you can really start getting to the cause of the issue. For example, say you consider 'high' to be 75% CPU utilization during the day. Are you experiencing a problem, are databases inadequately balanced, or is the server just undersized? What about a 100% CPU condition? Does it happen for 10 seconds at a time or 10 minutes at a time? Does it only happen when clients first log on in the morning, or after a failover? In this article I'll go into some common causes of high CPU utilization issues in Exchange 2013 and how to troubleshoot them.

At this point I should note that this article is about Exchange 2013 specifically, not earlier versions. High CPU issues do have some things in common across versions; however, much of the data in this article is specific to Exchange 2013. There are some fairly significant differences between Exchange 2010 and Exchange 2013 that change the best practices and troubleshooting methodology, including completely different megacycle requirements, different versions of the .NET Framework, and a different implementation of .NET Garbage Collection. Therefore, I will not be covering Exchange 2010 in this post.

Common Configuration Issues

Those of us who have worked enough performance issues start by following a list of things to check first. This was actually the main motivation for a TechNet article we recently published called Exchange Server 2013 Sizing and Configuration Recommendations. I'm not going to duplicate everything in that article here; I would suggest you read it if you are interested in this topic. I will, however, touch on a few of the high points.

.NET Framework version

Exchange 2013 runs on version 4.5 of the .NET Framework. The .NET team has published updates to .NET 4.5, released as versions 4.5.1 and 4.5.2. All of these versions are supported on Exchange 2013. However, I would strongly recommend that 4.5.2 be the default choice for any Exchange 2013 installation unless you have very specific reasons not to use it. There have been multiple performance related fixes from version to version, some of which impact Exchange 2013 fairly heavily. We've seen more than a few of these in support. You can save yourself a lot of trouble by upgrading to 4.5.2 as soon as possible, if you are not already there. It should also be noted that 4.5.2 is the latest version as of the publishing of this blog post. Future releases will contain even more improvements so be sure to always check for the latest available version. You can read more about the different versions of the .NET Framework here.

Power Management

I started losing count a while back of the number of high CPU cases I encountered that were caused by misconfigured power management. Power management sounds like a good thing, right? In many cases it is. Power management allows the hardware or the OS to, among other things, throttle power to the CPU and turn off an idle network card when it isn't in use. On workstations and perhaps on certain servers this can be a good thing. It saves power, lowers the electric bill, gives you a nice low carbon footprint, and makes vegetables taste good. So why is this a bad thing? Consider this: you have a server consistently running at about 80% CPU throughout the work day. You've run the sizing numbers over and over and you should be closer to 55%. You don't see any unusual client activity. Everything looks great except the CPU utilization. Now what if you were to find out that your 2.4GHz cores are only operating at 1.2GHz most of the time? That might make a difference in your reported CPU utilization. For Exchange the guidance is straightforward: if hardware power management is an option, don't use it. You should allow the operating system to manage power, and you should always use the "High performance" power plan in Windows. Even if you aren't using hardware-based power management, just having the power plan set to the default "Balanced" can be enough to throttle the CPU power.

How do you know if this is happening? On a physical server the answer is easy. There is a counter in performance monitor called "Processor Information(_Total)\% of Maximum Frequency". This should always be at 100. Anything lower indicates that the CPU is being throttled, which is usually the result of some kind of power management, either at the hardware or OS level. On a virtual server things get a bit more complicated. Because the Exchange server is a VM guest, its CPU performance numbers cannot be completely trusted. If power is being throttled at the VM host layer, it will not be apparent to the guest OS. You need to use the performance monitoring tools of the VM host to check for processor power throttling.

[Screenshot: CPU throttling shown in Perfmon]
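To put numbers on the throttling example above: the "% of Maximum Frequency" counter translates directly into the effective clock speed the cores are actually running at. Here is a minimal sketch of that arithmetic (the function name and sample values are illustrative, not from any Exchange tooling):

```python
def effective_clock_ghz(rated_ghz, pct_of_max_frequency):
    """Effective core speed implied by the Perfmon '% of Maximum Frequency' value."""
    return rated_ghz * pct_of_max_frequency / 100.0

# The scenario from the text: 2.4 GHz cores throttled to half their frequency.
print(effective_clock_ghz(2.4, 50))   # -> 1.2
print(effective_clock_ghz(2.4, 100))  # -> 2.4
```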

Health Checker

We've recently published a PowerShell script on the TechNet gallery that makes checking for common configuration issues easy. The script reports Hardware/Processor information, NIC settings, Power plan, Pagefile settings, .NET Framework version, and some other items. It also has a Client Access load balancing check (current connections per server) and a Mailbox Report (active/passive database and mailbox total per server). It can be executed remotely and can run against all servers in the Organization at once, to save the trouble of having to check all of these settings individually on each server. The TechNet gallery posting contains more details on the script as well as some common usage syntax.

Sizing

After we've ruled out the common causes from the previous section, we move on to sizing. Perhaps the CPU is running high because the server doesn't have enough megacycles to keep up with the load being placed on it. Sizing Exchange 2013 is covered in multiple blog posts. If you want a good understanding of sizing, I suggest reading Jeff Mealiffe’s post Ask the Perf Guy: Sizing Exchange 2013 Deployments. If you haven't done it already, you should also run through Ross Smith IV's sizing calculator. Most deployments have utilized the calculator for planning and sizing. I'm a support guy, so I'm approaching this topic from the angle of troubleshooting an existing environment. In the world of troubleshooting we don’t need to size and plan a deployment, but we do need to know enough about it to tell whether a performance problem is simply an issue of being undersized. Troubleshooting a high CPU issue with no knowledge of sizing is difficult at best and many times just not possible. When it comes to CPU sizing it comes down to this question: do I have enough available megacycles to handle the load?

Easy enough, right? Not quite. How many available megacycles you have is fairly straightforward to determine, although it does require a bit of math. The basic formula (taken directly from Jeff's sizing blog) is as follows:

Adjusted megacycles per-core = (target platform per-core score value × MHz per-core of baseline platform) / baseline per-core score value

Two of these numbers are already known. The MHz per-core of the baseline platform is always 2000, and the baseline per-core score value is always 33.75. Again, this is specific to Exchange 2013 only. All you need now is your target platform's per-core score value. This value is the SPECInt 2006 rating of your server divided by the total number of physical cores. If you don't want to use the SPEC website, you can look up your server's rating with the Exchange Processor Query Tool. Say our SPECInt 2006 rating on a 12 core server is 430, giving us a per-core rating of 35.83 (430/12). The formula now looks like this:

Adjusted megacycles per-core = (35.83 × 2000) / 33.75 = 2123.26

2123.26 megacycles per-core, times 12 cores, gives you 25,479 total megacycles available. Now we have to find out the required megacycles. This is a bit more complicated. It depends on the number of active and passive mailboxes you have along with message profile (messages sent/received per day) and any multipliers that may be required by 3rd party products. Luckily, there is a script to help with this as well.

The Exchange 2013 CPU Sizing Checker will run these numbers for you. You can pass in all of the profile information but it is easier to just import the values directly from your sizing calculator results. Syntax can be found on the download page.

[Screenshot: output of the Exchange 2013 CPU Sizing Checker]
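Stepping back to the available-megacycles side of the math, the worked example above can be sketched in a few lines of code. This is just the arithmetic from the formula (the per-core score is rounded to two decimals to match the 35.83 used in the example); the constants are the fixed Exchange 2013 baseline values, and the function name is my own:

```python
BASELINE_MHZ_PER_CORE = 2000     # MHz per-core of the baseline platform (Exchange 2013)
BASELINE_PER_CORE_SCORE = 33.75  # baseline per-core score value (Exchange 2013)

def available_megacycles(specint_2006_rating, physical_cores):
    """Total adjusted megacycles available on a target platform."""
    per_core_score = round(specint_2006_rating / physical_cores, 2)  # 430/12 -> 35.83
    adjusted_per_core = per_core_score / BASELINE_PER_CORE_SCORE * BASELINE_MHZ_PER_CORE
    return adjusted_per_core * physical_cores

print(round(available_megacycles(430, 12)))  # -> 25479
```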

Version 7.2 of the Sizing Calculator also allows us to get an idea of the expected CPU utilization. The difference is that it calculates expected CPU utilization based on the number of active and passive mailboxes planned, taking the values from the Input page of the spreadsheet (as opposed to querying the mailbox server for a current total). The new features in version 7.2 let you know what to expect from a CPU utilization standpoint in several different scenarios:

  - Normal Runtime (no failures, evenly distributed databases)
  - Single Failure (a single server in the datacenter has failed, resulting in database copy activation)
  - Double Failure (two servers in the datacenter have failed, resulting in database copy activation)
  - Site Failure (a datacenter has failed, requiring failover to another datacenter)
  - Worst Failure (worst possible failure based on design requirements for the environment)

Message Profile and Multiplier

By now you're probably saying "this is nice, but how do I know my message profile and multiplier numbers?" Great question. The message profile numbers on a live production deployment can actually be determined by yet another great script from Dan Sheehan called Generate-MessageProfiles.ps1, available on TechNet Gallery. This script will parse your transport logs and give you an actual number of messages sent/received per day. In addition to publishing the script, Dan has written a blog post that explains the script and its usage in detail.

That works for message profiles. What about the multiplier? This is the tough one. Some 3rd party vendors will give you a suggested multiplier for their software, but sometimes this information is not available. In that case you can use the previously referenced Exchange 2013 CPU Sizing Checker script to reverse engineer the multiplier. Let's say you run the script with a multiplier of 1.0. It gives you a CPU number of 50%, which is the average CPU usage you can expect from the Exchange-specific processes during the busiest hours of the day. You, however, are seeing a value closer to 65%. You can run the script again, modifying the multiplier, until you get a result close to 65%. Once you do, that can give you an idea of what multiplier number you should be using in your sizing plans.
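If you assume the script's predicted CPU scales roughly linearly with the multiplier (a simplifying assumption on my part, not something the sizing script guarantees), the iterate-until-it-matches loop above collapses to a single division:

```python
def estimate_multiplier(predicted_pct_at_1_0, observed_pct):
    """Rough multiplier estimate, assuming CPU scales linearly with the multiplier."""
    return observed_pct / predicted_pct_at_1_0

# The example from the text: script predicts 50% at multiplier 1.0, we observe 65%.
print(estimate_multiplier(50.0, 65.0))  # -> 1.3
```

Even with this shortcut, it's worth re-running the sizing checker with the estimated value to confirm the predicted CPU lands near what you actually observe.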

As previously mentioned, version 7.2 of the sizing calculator has the ability to predict CPU values based on your planned deployment numbers. This means that you can modify the “Megacycles Multiplication Factor” in the profile settings on the calculator’s Input tab and view the results in the “CPU Utilization/Dag” section on the Role Requirements tab to get an idea of which multiplier value suits your deployment best. In most cases this is preferable to using the script as the calculator is faster and designed around helping you plan your deployment (as opposed to the script which is more for troubleshooting).

Oversizing

Contrary to what you may think, it is possible to oversize your servers from a CPU standpoint. This doesn't come down to raw processing power; it might be an inefficient use of hardware in some cases to deploy on servers with high core counts, but too much processing power isn't the problem. When I talk about oversizing, I'm not really talking about the available megacycles so much as the number of cores. Exchange 2013 was developed to run on commodity-type servers. Testing is generally done on servers with processor specifications of 2 sockets and about 16-20 cores. This means that if you deploy on servers with a much larger core count you may run into scalability issues.

Core count is used to determine settings at the application level that can make a difference in performance. For example, in processes that use Server mode Garbage Collection we create one managed heap per core (you can read in detail about Garbage Collection in .NET 4.5 here). This can significantly increase the memory footprint of the process, and it goes up the more cores you have. We also use core count to determine the minimum number of threads in the threadpool of many of our processes. The default is 9 per core; if you have a 32 core server, that's 288 threads. If, for example, there is a sudden burst of activity, you could have a lot of threads trying to do work concurrently. Some of the locking mechanisms for thread safety in Exchange 2013 were not designed to work as efficiently in high core count scenarios as they do in the recommended core count range. This means that under certain conditions, having too many cores can actually lead to a high CPU condition. Hyper-Threading can also have an effect here, since a 16 core Hyper-Threaded server will appear to Exchange as having 32 cores. This is one of the multiple reasons why we recommend leaving Hyper-Threading disabled.

These are just a few examples, but they show that staying within the product group's server sizing recommendations is extremely important. Scaling out rather than up is better from a cost standpoint, a high availability standpoint, and a product design standpoint.
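The way core count fans out into per-process settings can be illustrated with the figures from this section (one server-GC heap per core, a minimum of 9 threadpool threads per core). The function below is purely an illustration of that arithmetic, not how Exchange computes anything internally:

```python
def core_scaling_footprint(physical_cores, hyperthreaded=False):
    """Per-process effects of core count, using the figures cited in this post."""
    visible_cores = physical_cores * 2 if hyperthreaded else physical_cores
    return {
        "visible_cores": visible_cores,
        "server_gc_heaps": visible_cores,            # one managed heap per core
        "min_threadpool_threads": visible_cores * 9,  # default of 9 per core
    }

# A 16 core Hyper-Threaded server presents 32 cores to Exchange:
print(core_scaling_footprint(16, hyperthreaded=True))
```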

Single Process Causing High CPU

Generally if you have a CPU throttling issue or are undersized, you will see high CPU that will not seem to be caused by a single process. Rather, the server just looks "busy". The CPU utilization is high, but no single process appears to be the cause. There are times though where a single process can be causing the CPU to go high. In this section we will go over some tricks with performance monitor to narrow down the offending process and dig a bit into why it may be happening.

Perfmon Logs

Perfmon is great, but what if you were not capturing perfmon data when the problem happened? Luckily, Exchange 2013 includes the ability to capture daily performance data, and this feature is turned on by default. The logs are located in the Exchange Server installation folder under “V15\Logging\Diagnostics\DailyPerformanceLogs”. These are binary log (*.blg) files that are readable by perfmon.exe. To review one, launch perfmon, go to Monitoring Tools\Performance Monitor, click the “View Log Data” button, select “Log Files” under Data Source, click Add, and browse to the file you wish to view. The built-in log capturing feature has to balance between gathering useful data and not taking up too much disk space, so it does not capture every single counter and it only captures on a one minute interval. In most cases this is enough to get started. If you find you need a more robust counter set or a shorter sample interval, you can use ExPerfWiz to set up a more custom capture. A tip here: if you want to collect this information regularly and from multiple servers, check out this blog post.

Perfmon Analysis

The very first counter I load when analyzing a perfmon log for a high CPU issue is "Process(_Total)\% Processor Time". It gives you an idea of the total CPU utilization for the server. This is important because, first and foremost, you need to make sure the capture contains the high CPU condition. With this counter a CPU utilization increase should be easy to spot. If it was a brief burst, you can then zoom into the time that it happened to get a closer look at what else was going on. Note the difference between Process(_Total) and Processor(_Total): Processor is based on a scale of 0-100 (overall CPU usage percentage), while Process(_Total) is based on the core count of the server. On a 16 core server, a 100% CPU spike would show a Process(_Total) value of 1600. Either one can be used to start, as long as you understand the difference. If you are looking at a perfmon capture and don't know the total number of cores, just look at the highest number in the instances window under the Processor counter. It is a zero-based collection, each number representing a core; if 23 is the highest number, you have 24 cores. During this phase of troubleshooting it may be best to change the vertical scale of the perfmon window. To do this, right-click in the window, choose Properties, go to the Graph tab, and change the maximum to core count x 100. In our 16 core example you would change it to 1600.
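The two scale conversions described above - normalizing the core-scaled Process(_Total) value back to a 0-100 percentage, and inferring core count from the zero-based Processor instances - are simple enough to sketch (the function names are mine, for illustration):

```python
def overall_cpu_pct(process_total_value, core_count):
    """Normalize 'Process(_Total)\\% Processor Time' (range 0..cores*100) to 0-100."""
    return process_total_value / core_count

def cores_from_highest_instance(highest_instance_number):
    """Processor instances are zero-based, so the highest instance + 1 = core count."""
    return highest_instance_number + 1

print(overall_cpu_pct(1600, 16))        # -> 100.0 (a full spike on 16 cores)
print(cores_from_highest_instance(23))  # -> 24
```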

Now that you know there was a high CPU condition and when it occurred, we can start narrowing down what caused it. The next thing to do is load all instances under "Process\% Processor Time". You can ignore "_Total" as we're already using it as our measurement for overall CPU utilization. You can also ignore Idle for now, as it will inversely mirror "_Total". Look for any specific process that goes up in tandem with the overall CPU utilization. If there isn't one in particular, you don't have a single process causing the issue. This tends to point to some of the topics covered in the previous sections, such as sizing, load, and CPU throttling.

Mapping w3wp instances to application pools

Let's say you do find one particular process that is causing the high CPU condition. Suppose the process is named "w3wp#1". What exactly are you supposed to do with that? Exchange runs multiple application pools in IIS for the various protocols it supports, so we need to find out which application pool "w3wp#1" maps to. Luckily, perfmon has the information we need; you just need to know how to find it.

The first thing you want to do is load the counter "Process(w3wp#1)\ID Process". This will give you the process ID (PID) of that w3wp instance. Let's say it's 22480. With that information we go back to the counter load screen and look under "W3SVC_W3WP". Click on any of the counters. Below you will see a window that contains entries with the format PID_AppPool. In our example it says 22480_MSExchangeSyncAppPool. That tells us that w3wp#1 belongs to the Exchange ActiveSync application pool. Now we know that ActiveSync is the cause of our high CPU. At this point you can remove all of the counters from your view except for "Process(w3wp#1)\% Processor Time" as the extra clutter is no longer needed. You may also want to set the vertical scale back to 100 and right click on the counter and choose "Scale Selected Counters".

I should also note here that due to managed availability health checks, sometimes an application pool is restarted. When this happens the PID and the w3wp instance may change. Pay attention to the “Process(w3wp*)\ID Process” counter for the worker process you are interested in. If this value changes that means the process was recycled, the PID changed, and perhaps the w3wp instance as well. You will need to verify if the instance changed after the process recycled to make sure you are still looking at the right information.

What is the process doing?

Now that we've narrowed it down to w3wp#1 and know that ActiveSync is the cause of our issue, we can start to dig into troubleshooting it specifically. These methods can be used on multiple other application pools, but this example will be specific to ActiveSync. The most common thing to look for is a burst in activity. We can load up the counter "MSExchangeActiveSync\Requests /sec" to see if there was an increase in requests around the time of the problem. Either way, we now know whether increased request traffic led to the CPU increase. If it did, we need to find the cause of the traffic. It's a good idea to check the counter "MSExchange IS Mailbox(_Total)\Messages Delivered /sec". If this ticks up right before the CPU increase, it tells you that there was a burst of incoming messages that likely triggered it. You can then review the transport logs for clues. If it wasn't message delivery, it may have been mobile device activity; in that case you can use Log Parser Studio to analyze the IIS logs for trends in ActiveSync traffic.

Garbage Collection (GC)

If there was no noticeable increase in request traffic or message delivery before the CPU increase, there may be something inside the process causing it. Garbage collection is a common trigger. Look at ".NET CLR Memory(w3wp#1)\% Time in Garbage Collection". If it sustains higher than 10% during the issue, it could be the trigger of the high CPU. If this is the case, also look at ".NET CLR Memory(w3wp#1)\Allocated Bytes /sec". If this counter sustains above 50,000,000 during the high CPU condition and is coupled with an increase in "% Time in Garbage Collection", it means the Garbage Collector may not be able to keep up with the load being placed on it. I want to note very clearly here that if you encounter this, Garbage Collection throughput usually isn't the root of the problem; it is another symptom. Increases of this type usually indicate abnormal load is being placed on the system. It is much better to find the root cause and eliminate it rather than to start changing garbage collector settings to compensate.
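The two-counter rule of thumb above can be sketched as a simple check: flag possible GC pressure when "% Time in Garbage Collection" sustains above 10% and "Allocated Bytes /sec" sustains above 50,000,000 at the same time. The thresholds are the ones quoted in this post, not hard product limits, and the function is my own illustration:

```python
GC_TIME_PCT_THRESHOLD = 10.0
ALLOC_BYTES_PER_SEC_THRESHOLD = 50_000_000

def gc_pressure(samples):
    """samples: list of (pct_time_in_gc, allocated_bytes_per_sec) tuples
    taken during the high CPU window. Returns True when both counters
    stay above their thresholds for every sample."""
    sustained_gc_time = all(t > GC_TIME_PCT_THRESHOLD for t, _ in samples)
    sustained_alloc = all(a > ALLOC_BYTES_PER_SEC_THRESHOLD for _, a in samples)
    return sustained_gc_time and sustained_alloc

print(gc_pressure([(12.5, 61_000_000), (15.0, 58_000_000)]))  # -> True
print(gc_pressure([(4.0, 20_000_000), (5.5, 18_000_000)]))    # -> False
```

Remember that even when this returns True, GC throughput is a symptom; keep looking for the load that is driving the allocations.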

RPC Operations/sec

This is perhaps the best counter we have in mapping client activity to high CPU. You can load up "MSExchangeIS Client Type(*)\RPC Operations /sec" to get an idea of how many RPC requests are being issued against the Information Store by client type. Usually the highest offenders will be momt (Requests from the RPC Client Access Service, usually Outlook MAPI clients), contentindexing, webservices (EWS), and transport (mail delivery). You really need to have a baseline of your environment to know what "normal" is but you can definitely use this counter to compare to the overall CPU utilization to see if client requests are causing a CPU utilization increase.

Log Parser Studio (LPS)

If I were stuck on a desert island and had to troubleshoot Exchange performance issues for food, and could only bring two tools, they would be perfmon and Log Parser Studio. LPS contains several built-in queries to help you easily analyze traffic for the various protocols used by Exchange. You can use it to get a view of the most ActiveSync hits per day by device, EWS requests by client type, RPC Client Access MAPI client version by percentage, and many others. The built-in queries are great for just about anything you'd need to find out. If you need more and know a bit of TSQL, you can even write your own. LPS is covered in depth in Kary Wall's blog post. Once you have narrowed down the client type causing your issue, LPS is usually the next step.

Conclusion

Performance is a vast topic and I don't expect this blog post will make you an expert immediately, but hopefully it has given you enough tips and tricks to start tracking down Exchange 2013 high CPU issues on your own. If there are other topics you would like to see us blog about in the realm of Exchange performance please leave feedback below. Happy troubleshooting!

Marc Nivens


New Support Policy for Repaired Exchange Databases


The database repair process is often used as a last-ditch effort to recover an Exchange database when no other means of recovery is available. The process should only be followed at the advice of Microsoft Support and after determining that all other recovery options have been exhausted. For many years and across many versions of Exchange, the repair process has largely been the same. However, that process is changing, based on information Microsoft has gathered from an extensive analysis of support cases.

In short, Microsoft is changing the support policy for databases that have had a repair operation performed on them. Previously, a database remained supported if the repair was performed using ESEUTIL and ISINTEG or the repair cmdlets. Under the new support policy, any database whose repair count is greater than 0 will need to be evacuated – all mailboxes on such a database will need to be moved to a new database.

Existing Repair Process

The process consists of three steps:

  1. Repair the database at the page level
  2. Defragment the database to restructure and recreate it
  3. Repair the logical structures within the database

Step 1 of the repair process is accomplished by using ESEUTIL /p. This is typically performed when there is page level corruption in the database - for example, a -1018 JET error, or when a database is left in dirty shutdown state as the result of not having the necessary log files to bring the database to a clean shutdown state. After executing ESEUTIL /p you are prompted to confirm that data loss may result. Selecting OK is required to continue.

[Screenshot: ESEUTIL /p data loss confirmation prompt]

[PS] C:\Program Files\Microsoft\Exchange Server\Mailbox\First Storage Group>eseutil /p '.\Mailbox Database.edb'
Extensible Storage Engine Utilities for Microsoft(R) Exchange Server
Version 08.03
Copyright (C) Microsoft Corporation. All Rights Reserved.
Initiating REPAIR mode...
Database: .\Mailbox Database.edb
Temp. Database: TEMPREPAIR4520.EDB
Checking database integrity.
The database is not up-to-date. This operation may find that this database is corrupt because data from the log files has yet to be placed in the database. To ensure the database is up-to-date please use the 'Recovery' operation.
Scanning Status (% complete)
0 10 20 30 40 50 60 70 80 90 100
|----|----|----|----|----|----|----|----|----|----|
.
Rebuilding MSysObjectsShadow from MSysObjects.
Scanning Status (% complete)
0 10 20 30 40 50 60 70 80 90 100
|----|----|----|----|----|----|----|----|----|----|
...................................................
Checking the database.
Scanning Status (% complete)
0 10 20 30 40 50 60 70 80 90 100
|----|----|----|----|----|----|----|----|----|----|
...................................................
Scanning the database.
Scanning Status (% complete)
0 10 20 30 40 50 60 70 80 90 100
|----|----|----|----|----|----|----|----|----|----|
...................................................
Repairing damaged tables.
Scanning Status (% complete)
0 10 20 30 40 50 60 70 80 90 100
|----|----|----|----|----|----|----|----|----|----|
...................................................
Repair completed. Database corruption has been repaired!
Note:
It is recommended that you immediately perform a full backup of this database. If you restore a backup made before the repair, the database will be rolled back to the state it was in at the time of that backup.
Operation completed successfully with 595 (JET_wrnDatabaseRepaired, Database corruption has been repaired) after 30.187 seconds.

At this point, the database should be in a clean shutdown state and the repair process may proceed. This can be verified with ESEUTIL /mh.

[PS] C:\Program Files\Microsoft\Exchange Server\Mailbox\First Storage Group>eseutil /mh '.\Mailbox Database.edb'
State: Clean Shutdown

Step 2 is to defragment the database using ESEUTIL /d. Defragmentation requires significant free space on the volume that will host the temporary database (typically 110% of the size of the database must be available as free disk space).

[PS] C:\Program Files\Microsoft\Exchange Server\Mailbox\First Storage Group>eseutil /d '.\Mailbox Database.edb'
Extensible Storage Engine Utilities for Microsoft(R) Exchange Server
Version 08.03
Copyright (C) Microsoft Corporation. All Rights Reserved.
Initiating DEFRAGMENTATION mode...
Database: .\Mailbox Database.edb
Defragmentation Status (% complete)
0 10 20 30 40 50 60 70 80 90 100
|----|----|----|----|----|----|----|----|----|----|
...................................................
Moving 'TEMPDFRG3620.EDB' to '.\Mailbox Database.edb'... DONE!
Note:
It is recommended that you immediately perform a full backup of this database. If you restore a backup made before the defragmentation, the database will be rolled back to the state it was in at the time of that backup.
Operation completed successfully in 7.547 seconds.

Step 3 is the logical repair of the objects within the database. The method used to accomplish this varies by Exchange version.

In Exchange 2007, ISINTEG is used to perform the logical repair, as illustrated in the following example:

C:\>isinteg -s wingtip-e2k7 -fix -test alltests -verbose -l c:\isinteg.log
Databases for server wingtip-e2k7:
Only databases marked as Offline can be checked
Index Status Database-Name
Storage Group Name: First Storage Group
1 Offline Mailbox Database
Enter a number to select a database or press Return to exit.
1
You have selected First Storage Group / Mailbox Database.
Continue?(Y/N)y
Test Categorization Tables result: 0 error(s); 0 warning(s); 0 fix(es); 0 row(s); time: 0h:0m:0s
Test Restriction Tables result: 0 error(s); 0 warning(s); 0 fix(es); 0 row(s); time: 0h:0m:0s
Test Search Folder Links result: 0 error(s); 0 warning(s); 0 fix(es); 0 row(s);time: 0h:0m:0s
Test Global result: 0 error(s); 0 warning(s); 0 fix(es); 1 row(s); time: 0h:0m:0s
Test Delivered To result: 0 error(s); 0 warning(s); 0 fix(es); 0 row(s); time: 0h:0m:0s
Test Repl Schedule result: 0 error(s); 0 warning(s); 0 fix(es); 0 row(s); time:0h:0m:0s
Test Timed Events result: 0 error(s); 0 warning(s); 0 fix(es); 0 row(s); time: 0h:0m:0s
Test reference table construction result: 0 error(s); 0 warning(s); 0 fix(es); 0 row(s); time: 0h:0m:0s
Test Folder result: 0 error(s); 0 warning(s); 0 fix(es); 4996 row(s); time: 0h:0m:2s
Test Deleted Messages result: 0 error(s); 0 warning(s); 0 fix(es); 0 row(s); time: 0h:0m:0s
Test Message result: 0 error(s); 0 warning(s); 0 fix(es); 1789 row(s); time: 0h:0m:0s
Test Attachment result: 0 error(s); 0 warning(s); 0 fix(es); 406 row(s); time: 0h:0m:0s
Test Mailbox result: 0 error(s); 0 warning(s); 0 fix(es); 249 row(s); time: 0h:0m:0s
Test Sites result: 0 error(s); 0 warning(s); 0 fix(es); 996 row(s); time: 0h:0m:0s
Test Categories result: 0 error(s); 0 warning(s); 0 fix(es); 0 row(s); time: 0h:0m:0s
Test Per-User Read result: 0 error(s); 0 warning(s); 0 fix(es); 0 row(s); time:0h:0m:0s
Test special folders result: 0 error(s); 0 warning(s); 0 fix(es); 0 row(s); time: 0h:0m:0s
Test Message Tombstone result: 0 error(s); 0 warning(s); 0 fix(es); 0 row(s); time: 0h:0m:0s
Test Folder Tombstone result: 0 error(s); 0 warning(s); 0 fix(es); 0 row(s); time: 0h:0m:0s
Now in test 20(reference count verification) of total 20 tests; 100% complete.
Typically when ISINTEG completes, it advises reviewing the isinteg.log file. At the end of the file is a summary section listing the number of errors encountered. If the number of errors is greater than zero, you need to re-run the command. Repairs should be repeated until the error count reaches 0, or until the same number of errors is reported by two consecutive executions.
. . . . . SUMMARY . . . . .
Total number of tests : 20
Total number of warnings : 0
Total number of errors : 0
Total number of fixes : 0
Total time : 0h:0m:3s

In Exchange 2010 and later, ISINTEG was deprecated and certain functions were replaced by the New-MailboxRepairRequest and New-PublicFolderDatabaseRepairRequest cmdlets, both of which allow for repair operations to occur while the database is online.

Exchange 2010:

[PS] C:\Windows\system32>New-MailboxRepairRequest -Mailbox user252 -CorruptionType SearchFolder,FolderView,AggregateCounts,ProvisionedFolder,MessagePtagCN,MessageID
RequestID Mailbox ArchiveMailbox Database Server
--------- ------- -------------- -------- ------
7f499ce3-e Wingtip False Mailbox. WINGTIP-E2K10.Wingti...

Exchange 2013:

[PS] C:\>New-MailboxRepairRequest -Mailbox User532 -CorruptionType SearchFolder,FolderView,AggregateCounts,
ProvisionedFolder,ReplState,MessagePTAGCn,MessageID,RuleMessageClass,RestrictionFolder,FolderACL,
UniqueMidIndex,CorruptJunkRule,MissingSpecialFolders,DropAllLazyIndexes,ImapID,ScheduledCheck,Extension1,
Extension2,Extension3,Extension4,Extension5
Identity Task Detect Only Job State Progress
-------- ---- ----------- --------- --------
a44acf2b {Sea False Queued 0

Upon completion of these repair operations, the database typically could be mounted and normal user operations resumed.

Support Change for Repaired Databases

Over the course of the last two years, we have reviewed Watson dumps for Information Store crashes that were automatically uploaded by customers’ servers. The crashes were caused by inexplicable, seemingly impossible store-level corruption. The types of store-level corruption varied, and they came from many different databases, servers, Exchange versions, and customers. In almost all of these cases one significant fact was noted – the repair count recorded on the database was > 0.

When ESEUTIL /p is executed and a repair to the database is necessary, the repair count is incremented and the repair time is recorded in the header of the database. The repair information stored in the database header is retained after offline defragmentation. Repair information in the header may be viewed with ESEUTIL /mh.

[PS] C:\Program Files\Microsoft\Exchange Server\Mailbox\First Storage Group>eseutil /mh '.\Mailbox Database.edb'
Extensible Storage Engine Utilities for Microsoft(R) Exchange Server
Version 08.03
Copyright (C) Microsoft Corporation. All Rights Reserved.
Initiating FILE DUMP mode...
Database: .\Mailbox Database.edb
File Type: Database
Format ulMagic: 0x89abcdef
Engine ulMagic: 0x89abcdef
Format ulVersion: 0x620,12
Engine ulVersion: 0x620,12
Created ulVersion: 0x620,12
DB Signature: Create time:04/05/2015 08:39:24 Rand:2178804664 Computer:
cbDbPage: 8192
dbtime: 1059112 (0x102928)
State: Clean Shutdown
Log Required: 0-0 (0x0-0x0)
Log Committed: 0-0 (0x0-0x0)
Streaming File: No
Shadowed: Yes
Last Objid: 4020
Scrub Dbtime: 0 (0x0)
Scrub Date: 00/00/1900 00:00:00
Repair Count: 2
Repair Date: 04/05/2015 08:39:24
Old Repair Count: 0

Last Consistent: (0x0,0,0) 04/05/2015 08:39:25
Last Attach: (0x0,0,0) 04/05/2015 08:39:24
Last Detach: (0x0,0,0) 04/05/2015 08:39:25
Dbid: 1
Log Signature: Create time:00/00/1900 00:00:00 Rand:0 Computer:
OS Version: (6.1.7601 SP 1 NLS 60101.60101)
Previous Full Backup:
Log Gen: 0-0 (0x0-0x0)
Mark: (0x0,0,0)
Mark: 00/00/1900 00:00:00
Previous Incremental Backup:
Log Gen: 0-0 (0x0-0x0)
Mark: (0x0,0,0)
Mark: 00/00/1900 00:00:00
Previous Copy Backup:
Log Gen: 0-0 (0x0-0x0)
Mark: (0x0,0,0)
Mark: 00/00/1900 00:00:00
Previous Differential Backup:
Log Gen: 0-0 (0x0-0x0)
Mark: (0x0,0,0)
Mark: 00/00/1900 00:00:00
Current Full Backup:
Log Gen: 0-0 (0x0-0x0)
Mark: (0x0,0,0)
Mark: 00/00/1900 00:00:00
Current Shadow copy backup:
Log Gen: 0-0 (0x0-0x0)
Mark: (0x0,0,0)
Mark: 00/00/1900 00:00:00
cpgUpgrade55Format: 0
cpgUpgradeFreePages: 0
cpgUpgradeSpaceMapPages: 0
ECC Fix Success Count: none
Old ECC Fix Success Count: none
ECC Fix Error Count: none
Old ECC Fix Error Count: none
Bad Checksum Error Count: none
Old bad Checksum Error Count: none
Operation completed successfully in 0.78 seconds.

Because uncorrectable corruption can linger in a repaired database and cause store crashes and server instability, we have changed our support policy to require the evacuation of any Exchange database that persistently has a repair count or old repair count equal to or greater than 1. Moving mailboxes (and public folders) to new databases ensures that the underlying database structure is sound and free from any corruption that the database repair process might not have corrected, and it helps prevent store crashes and server instability.

Tim McMichael

Hello from Microsoft Ignite

We’re excited to be kicking off Ignite this week in Chicago – doing it on Star Wars day is a nice bonus, and the Exchange team is here in full force. It has been great catching up with our customers and Exchange MVPs as we prepare for this action-packed week. At Ignite we will begin sharing details about Exchange Server 2016 and dig deep into features available in Office 365. You can catch many of us between the sessions on the expo floor showing off the latest demos or just geeking out on all things Exchange.

For those of you who made the journey to Ignite, we look forward to seeing you in the more than 100 Exchange and Outlook sessions happening this week. If you couldn’t make it to Ignite in person, no problem—all Ignite sessions are being recorded and published for public viewing within a day or two after they happen. See the full list here.

The Meet Exchange Server 2016 session is where we kick off the first round of details about the next version of Exchange Server. This session will be live streamed to everyone on http://ignite.microsoft.com on Tuesday May 5th at 10:45 AM Central time. The sessions will dig in from here, including a don’t-miss session on the Exchange Preferred Architecture with Ross Smith IV and the first set of guidance on deploying Exchange Server 2016 with Brian Day. Of course this event isn’t just about Exchange Server 2016 – we are excited to see the dynamic duo of Perry Clarke and Vivek Sharma deliver the next chapter of Behind the Curtain: Running Exchange Online and numerous drill down sessions on features like document collaboration, Outlook, clutter, and many more.

For now we’re off to a busy and exciting week to share the latest news about Exchange and hear from our awesome Exchange community. You can follow the action all week long on Twitter with the hashtags #iammec and #MSIgnite.

Jon Orton

Exchange Server 2016 Architecture

Exchange Server 2016 builds upon the architecture introduced in Exchange Server 2013, with the continued goal of improving the architecture to serve the needs of deployments at all scales.

Important: This article contains preliminary information that may be changed prior to final commercial release of the software described herein.

Building Block Architecture

In Exchange Server 2016, there is a single building block that provides the client access services and the high availability architecture necessary for any enterprise messaging environment.

Figure 1: Building Block Architecture

In our continuing quest to improve the product’s capabilities and simplify the architecture and its deployment, we have removed the Client Access server (CAS) role and added the client access services to the Mailbox role. Even without the CAS role, the system maintains loose coupling in terms of functionality, versioning, user partitioning and geographical affinity.

The Mailbox server role now:

  1. Houses the logic to route protocol requests to the correct destination endpoint.
  2. Hosts all of the components and/or protocols that process, render and store the data.

No clients connect directly to the back-end endpoints on the Mailbox server; instead, clients connect to client access services and are routed (via local or remote proxy) to the Mailbox server that hosts the active database containing the user’s mailbox.

Mailbox servers can be added to a Database Availability Group (DAG), thereby forming a high availability unit that can be deployed in one or more datacenters. DAGs in Exchange Server 2016 do have a few specific enhancements:

  1. DatabaseAvailabilityGroupIpAddresses is no longer required when creating a DAG. By default, the failover cluster will be created without an administrative access point, as this is the recommended best practice.
  2. Replay Lag Manager is enabled by default.
  3. Lagged database copy play down can be delayed based on disk latency, thereby ensuring active users are not impacted.
  4. Database failover times are reduced by 33% when compared to Exchange Server 2013.

Removal of the separate CAS role does not affect how communication occurs between servers. Communication between servers still occurs at the protocol layer, effectively ensuring that every server is an island. For a given mailbox’s connectivity, the protocol being used is always served by the protocol instance that is local to the active database copy.

Figure 2: Inter-server communication in Exchange 2016

The load balancer configuration is also not affected by this architectural change. From a protocol perspective, the following will happen:

  1. A client resolves the namespace to a load balanced virtual IP address.
  2. The load balancer assigns the session to a Mailbox server in the load balanced pool.
  3. The Mailbox server authenticates the request and performs a service discovery by accessing Active Directory to retrieve the following information:
    1. Mailbox version (for this discussion, we will assume an Exchange 2016 mailbox)
    2. Mailbox location information (e.g., database information, ExternalURL values, etc.)
  4. The Mailbox server makes the decision to proxy the request or redirect the request to another Mailbox server in the infrastructure (within the same forest).
  5. The Mailbox server queries an Active Manager instance that is responsible for the database to determine which Mailbox server is hosting the active copy.
  6. The Mailbox server proxies the request to the Mailbox server hosting the active copy.

The protocol used in step 6 depends on the protocol used to connect to client access services. If the client request uses HTTP, then the protocol used between the servers is HTTP (secured via SSL using a self-signed certificate). If the protocol used by the client is IMAP or POP, then the protocol used between the servers is IMAP or POP.

Telephony requests are unique. Instead of proxying the request at step 6, the Mailbox server will redirect the request to the Mailbox server hosting the active copy of the user’s database, as the telephony devices support redirection and need to establish their SIP and RTP sessions directly with the Unified Messaging services on the Mailbox server.

Figure 3: Client Protocol Connectivity

And yes, the Edge Transport server role will ship in Exchange Server 2016 (and at RTM, to boot!). All the capabilities and features you had in the Edge Transport server role in Exchange Server 2013, remain in Exchange Server 2016.

Why did we remove the Client Access server role?

The Exchange Server 2016 architecture evolves the building block architecture that has been refined over the course of the last several releases. With this architecture, all servers in the Exchange environment (excluding Edge Transport) are exactly the same—the same hardware, the same configuration, and so forth. This uniformity simplifies ordering the hardware, as well as performing maintenance and management of the servers.  

As with Exchange 2010 and Exchange 2013, we continue to recommend role co-location as a best practice. From a cost perspective, the overall goal is to ensure that the architecture is balanced for CPU and disk. Having separate server roles can result in long-term cost disadvantages as you may purchase more CPU, disk, and memory resources than you will actually use. For example, consider a server that hosts only the Client Access server role. Many servers enable you to add a given number of disks in a very economical fashion—when you are deploying and using that number of disks, the cost is essentially zero. But if you deploy a server role that uses far less than the given number of disks, you’re paying for a disk controller that is either under-used or not used at all.

This architecture is designed to enable you to have fewer physical Exchange servers in your environment. Fewer physical servers mean lower costs for a variety of reasons:

  • Operational costs are almost always higher than the capital costs. It costs more to manage a server over its lifetime than it does to purchase it.
  • You purchase fewer Exchange server licenses. This architecture only requires a license for one Exchange server and one operating system, while breaking out the roles required multiple Exchange server licenses and multiple operating system licenses.
  • Deploying fewer servers has a trickle-down effect across the rest of the infrastructure. For example, deploying fewer physical servers may reduce the total rack and floor space required for the Exchange infrastructure, which in turn reduces power and cooling costs.

This architecture ultimately distributes the load across a greater number of servers than deploying single-role servers, because all Mailbox servers also handle client access:

  • You’re distributing the load across a greater number of physical machines, which increases scalability. During a failure event, the load on the remaining servers only increases incrementally, which ensures the other functions the server is performing aren’t adversely affected.
  • The solution can survive a greater number of Client Access role (or service) failures and still provide service, which increases resiliency.

Key Architectural Improvements

Exchange Server 2016 also includes a number of architectural improvements, beyond the server role consolidation, including search enhancements, document collaboration improvements, and more.

Search Improvements

One of the challenging areas for on-premises environments was the amount of data replicated with each database copy in previous releases. In Exchange Server 2016, we have reduced bandwidth requirements between the active copy and a passive copy by 40%. This was accomplished by enabling the local search instance to read data from its local database copy. As a result of this change, passive search instances no longer need to coordinate with their active counterparts in order to perform index updates.

Another area of investment in search has been around decreasing the length of time to return search results, especially in online mode clients like OWA. This is accomplished by performing multiple asynchronous disk reads prior to the user completing the search term, which populates the cache with the relevant information, providing sub-second search query latency for online mode clients.

Document Collaboration

In previous releases of Exchange, OWA included document preview for Office and PDF documents, reducing the need to have a full fidelity client. SharePoint had a similar feature, however it used the Office Web Apps Server to accomplish this capability. Within Office 365, we also leverage Office Web Apps Server to provide this capability, ensuring uniform document preview and editing capability across the suite.

In Exchange Server 2016, we leverage Office Web Apps Server to provide the rich document preview and editing capabilities for OWA. While this was a necessary change to ensure a homogenous experience across the Office Server suite, this does introduce additional complexity for environments that don’t have Office Web Apps Server.

The next generation of Office Web Apps Server will not be supported for co-location with Exchange. Therefore, you must deploy a separate server farm infrastructure. This infrastructure will require unique namespaces, and will require session affinity to be maintained at the load balancer.

While Exchange supports an unbound namespace model, the Office Web Apps Server will require a bound namespace for each site resilient datacenter pair. However, unlike the bound namespace model within Exchange, Office Web Apps Server will not require any namespace changes during a datacenter activation.

Figure 4: Office Web Apps Server Connectivity

Extensibility

Office 365 introduced the REST APIs (Mail, Calendar, and Contact APIs), and now these APIs are available in Exchange Server 2016. The REST APIs simplify programming against Exchange by providing a familiar syntax that is designed with openness (e.g., support for open standards such as JSON, OAuth, and OData) and flexibility (e.g., granular, tightly scoped permission to access user data). These APIs allow developers to connect from any platform, whether it be web, PC, or mobile. SDKs exist for .NET, iOS, Android, Node.js, Ruby, Python, Cordova, and CORS for use in single-page JavaScript web apps.

What about Exchange Web Services (EWS)? All existing applications that leverage EWS will continue to work with Exchange Server 2016. We are, however, focusing new platform investments on the REST APIs and the apps for Office extensibility model. We expect to make significantly fewer investments in EWS so that we can focus our resources on investing in a single modern API that will, over time, enable most of the scenarios that our partners currently use EWS.

Outlook Connectivity

Introduced in Exchange Server 2013 Service Pack 1, MAPI/HTTP is the new standard in connectivity for Outlook. In Exchange Server 2016, MAPI/HTTP is enabled by default. In addition, Exchange Server 2016 introduces per-user control over this connectivity model, as well as the ability to control whether the protocol (and Outlook Anywhere) is advertised to external clients.

Note: Exchange Server 2016 does not support connectivity via the MAPI/CDO library. Third-party products (and custom in-house developed solutions) need to move to Exchange Web Services (EWS) or the REST APIs to access Exchange data.

Coexistence with Exchange Server 2013

In Exchange Server 2013, the Client Access server role is simply an intelligent proxy that performs no processing/rendering of the content. That architectural tenet paid off in terms of forward coexistence. When you introduce Exchange Server 2016, you do not need to move the namespace. That’s right, the Exchange Server 2013 Client Access infrastructure can proxy the mailbox requests to the Exchange 2016 servers hosting the active database copy! For the first time ever, you get to decide when you move the namespace over to the new version. And not only that, you can even have load balancer pools contain a mix of Exchange Server 2013 and Exchange Server 2016. This means you can do a one-for-one swap – as you add Exchange 2016 servers, you can remove Exchange 2013 servers.

The Preferred Architecture

During my session at Microsoft Ignite, I revealed Microsoft’s preferred architecture (PA) for Exchange Server 2016. The PA is the Exchange Engineering Team’s best practice recommendation for what we believe is the optimum deployment architecture for Exchange 2016, and one that is very similar to what we deploy in Office 365.

While Exchange 2016 offers a wide variety of architectural choices for on-premises deployments, this architecture is our most scrutinized one ever. While there are other supported deployment architectures, they are not recommended.

The Exchange 2016 PA is very similar to the Exchange 2013 PA. A symmetrical DAG is deployed across a datacenter pair with active database copies distributed across all servers in the DAG. Database copies are deployed on JBOD storage, with four copies per-disk. One of the copies is a lagged database copy. Clients connect to a unified namespace that is equally distributed across the datacenters in the site resilient pair.

However, the Exchange 2016 PA differs in the following ways:

  1. Exchange’s unbound namespace model is load balanced across the datacenters in a layer 7 configuration that does not leverage session affinity.
  2. An Office Web Apps Server farm is deployed in each datacenter, with each farm having a unique namespace (bound model). Session affinity is managed by the load balancer.
  3. The DAG is deployed without an administrative access point.
  4. The commodity dual-socket server hardware platform contains 20-24 cores and up to 196GB of memory, and a battery-backed write cache controller.
  5. All data volumes are formatted with ReFS.

As we get closer to release, we'll publish a complete Exchange 2016 Preferred Architecture article.

Summary

Exchange Server 2016 continues the investments introduced in previous versions of Exchange by reducing server role architecture complexity, aligning with the Preferred Architecture and Office 365 design principles, and improving coexistence with Exchange Server 2013.

These changes simplify your Exchange deployment, without decreasing the availability or the resiliency of the deployment. And in some scenarios, when compared to previous generations, the PA increases availability and resiliency of your deployment.

Ross Smith IV
Principal Program Manager
Office 365 Customer Experience

Public Folder team is looking for feedback!

Here at Microsoft Ignite in Chicago, the Public Folder team has been asking customers to fill out a survey to help them learn more about public folder use and get feedback on some possible future features they are considering. If you are up for it, the survey will not take long to fill out and lets the team know what you think! The survey will be live for about 2 weeks.

Exchange Public Folders – Ignite 2015 Survey

Thank you for your time!

Nino Bilic

Exchange @ Ignite 2015

We were joined by thousands of Exchange and Office 365 customers at Microsoft Ignite in Chicago last week. It was full of announcements, new content, and opportunities for the Exchange community to reconnect. For those who were on site we thank you for your enthusiastic participation and engaging questions. We spent time geeking out on everything from architecture designs to mobile clients, and everything in between. It was enough to drive creativity to new levels – in fact, in one discussion Tim McMichael broke out in singing “Everything is awesome... Everything is cool when you deploy a DAG…” Sounds like a new Exchange hit in the making to me!

If you weren’t able to join us, no worries–every session at Ignite was recorded and published to Channel 9. Here’s a recap of the top Exchange-related news of the week, and links to session recordings where you can get all the details.

Highlights

EXCHANGE SERVER 2016

We introduced Exchange Server 2016, our next on-premises Exchange Server release. Exchange Server 2016 includes exciting new features for on-premises customers, including a new approach to document collaboration, enhanced search, and a modern Outlook Web App experience, to name a few. The server also has a robust architecture that is cloud-inspired and proven. We demoed elements of Exchange Server 2016 for the first time ever in the Meet Exchange Server 2016 session. It is on track to be released later this calendar year following a public beta during the summer.

Meet Exchange Server 2016

Exchange Server
Preferred Architecture

Deploying Exchange Server 2016

OUTLOOK

The Office 2016 public preview was made available last week, allowing everyone to gain access to the new version of desktop Outlook. Outlook 2016 complements many of the features in Exchange Server 2016, but of course it also unlocks new capabilities in Office 365. We also provided a close look at the new Outlook Mail and Outlook Calendar apps on Windows 10 mobile devices. Outlook for Mac 2016 was also demoed and discussed in detail at Ignite. Check out these sessions to get caught up on all of the Outlook news.

Desktop Outlook:
Evolved and Redefined

Outlook on Mobile Devices

Meet the new Outlook
for Mac 2016

DEPLOYMENT OPTIONS

We also made two announcements that provide new deployment options for Exchange Server 2013. First we announced support for deploying Exchange Server 2013 on Azure IaaS VMs for production use with Azure Premium storage. We have added this option to provide customers deployment flexibility, but we continue to recommend deploying Exchange server on physical hardware as the best and most cost effective way to run Exchange outside of Office 365. We also announced that Exchange 2013 will support use of Hyper-V dynamic VHDX (not VHD) disks. Previous guidance required use of fixed virtual disks for support. Refer to these sessions for more details on these topics.

Exchange on IaaS:
Concerns, Tradeoffs, and Best Practices

Exchange Storage for Insiders: It's ESE

Content

Ignite included a dizzying number of sessions, which are all published on Channel 9. To help you find the most relevant content for Exchange–both on-premises and online–we have compiled this table of quick links. You can view session recordings over the web, download them to your phone or tablet, and access the PowerPoint decks used in the sessions. Happy learning.

Exchange Server:

New Feature Scenarios:

Office 365:

Outlook:

Compliance:

Office Developer:

See you next year

We had a fantastic time in Chicago, and we’re already thinking of ways to make next year’s event even better. Mark your calendars for May 9-13, 2016 and we’ll see you there!

Jon Orton

Exchange Online Advanced Threat Protection is now available

Parsing the Admin Audit Logs with PowerShell

One of the nice features introduced in Exchange 2010 was Admin Audit Logging. Concerned administrators everywhere rejoiced! This meant that a record of Exchange PowerShell activity, organization wide, was now saved and searchable.

Administrators could query the Admin Audit Log, using the Search-AdminAuditLog cmdlet, and reveal any cmdlets invoked, the date and time they were executed, and the identity of the person who issued the commands. However, the results of the search are a bit cryptic and don’t allow for easy bulk manipulation like parsing, reporting, or archiving.

The main complaint I heard from customers went something like this: “It’s great that I can see what Cmdlets are run, and what switches were used… but I can’t see the values of those switches!” Well, as it turns out, that data has actually been there the whole time; it’s just been stored in a non-obvious manner.

Consider a scenario where you’ve been informed that many, or all, of the mail users in your organization are reporting the wrong phone number listed in the Global Address List. It seems everyone has the same phone number now, let’s say 867-5309.

Because your organization uses Office 365 Directory Synchronization (DirSync), you know the change had to occur within your on-premises organization and was then subsequently synchronized to Office 365. The Search-AdminAuditLog Cmdlet must, therefore, be run on-premises.

It’s important to remember this concept. If you were investigating a Send Connector configuration change for your Office 365 – Exchange Online tenant, a search would need to be performed against your tenant instead. But let’s get back to our Jenny Phone number issue.

You know that the change was made on the 6th so you restrict the search to that date.

Search-AdminAuditLog -StartDate "4/6/2015 12:00:00 AM" -EndDate "4/6/2015 11:20:00 AM"

Reviewing the output, you find that Tommy executed the Set-User cmdlet, but there is no indication of what parameter(s) or values were used. What exactly did Tommy run? Where are the details!?

Then, you spot a clue. The ‘CmdletParameters’ and ‘ModifiedProperties’ are enclosed with braces { }. Braces are normally indicative of a hash table. You know a hash table is simply a collection of name-value pairs. You wonder if you’re only seeing the keys or a truncated view in this output. Could more details remain hidden?

Digging a bit deeper, you decide to store the search results to an array, named $AuditLog, which will allow for easier parsing.

$AuditLog = Search-AdminAuditLog -StartDate "4/6/2015 12:00:00 AM" -EndDate "4/6/2015 11:20:00 AM"

Next, you isolate a single entry in the array. This is done by calling the array variable and adding [0] to it. This returns only the first entry in the array.

$AuditLog[0]

To determine the object type of ‘CmdletParameters’, you use the GetType() method and, sure enough, it’s an array list.

$AuditLog[0].CmdletParameters.GetType()

Finally, you return the CmdletParameters array list to reveal all the details needed to conclude your investigation.

$AuditLog[0].CmdletParameters

Considering there are hundreds or thousands of entries in the audit log, how would you generate a full list of all the objects Tommy has changed? Or better yet, report all objects that he changed where ONLY the ‘Phone’ attribute was modified?
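
Before turning to a script, you could sketch the filtering yourself against the $AuditLog array built earlier. The property names below match the output shown above; the caller-matching pattern is an assumption about how your environment records the caller:

```powershell
# Sketch: entries where Tommy ran Set-User and the only parameter supplied
# besides Identity was 'Phone'.
$AuditLog |
    Where-Object { $_.Caller -like '*tommy*' -and $_.CmdletName -eq 'Set-User' } |
    Where-Object {
        # Collect every parameter except Identity, then require it to be just 'Phone'
        $nonIdentity = @($_.CmdletParameters | Where-Object { $_.Name -ne 'Identity' })
        ($nonIdentity.Count -eq 1) -and ($nonIdentity[0].Name -eq 'Phone')
    } |
    Select-Object RunDate, Caller, ObjectModified
```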

Fortunately, you don’t have to expend too much time on this. My colleague, Matthew Byrd recognized this exact problem and he wrote a PowerShell Script that does all the aforementioned steps for you and then some!

The script can be downloaded from TechNet Gallery and you’ll find it’s well documented and commented throughout. The script includes help (get-help .\Get-SimpleAuditLogReport.ps1) and can be used within Exchange 2010, Exchange 2013 and Office 365 - Exchange Online environments. That said, I’m not going to dissect the script. Instead, I will demonstrate how to use it.

The script simply manipulates or formats the results of the Search-AdminAuditLog query into a much cleaner and detailed output. You form your Search-AdminAuditLog query, then pipe it through the Get-SimpleAuditlogReport script for formatting and parsing.

Here are some usage examples:

This first example will output the results to the PowerShell Screen.

$Search = Search-AdminAuditLog -StartDate "4/6/2015 12:00:00 AM" -EndDate "4/6/2015 11:20:00 AM"
$Search | C:\Scripts\Get-SimpleAuditLogReport.ps1 –agree

You can see that the Get-SimpleAuditLogReport.ps1 script has taken the results stored in the $Search variable and attempted to rebuild the original command that was run. It isn’t perfect, but the goal of the script is to give you a command that you could copy and paste into an Exchange Shell window, and it should run.

Should you expect a lot of data to be returned or wish to save the results for later use, this example will save the results to a CSV file.

Search-AdminAuditLog -StartDate "4/6/2015 12:00:00 AM" -EndDate "4/6/2015 11:20:00 AM"| C:\Scripts\Get-SimpleAuditlogReport.ps1 -agree | Export-CSV -path C:\temp\auditlog.csv

This example uses one of my favorite output objects, Out-GridView, to display the results. This is a nice hybrid CSV/PowerShell output option. The results shown in the Out-GridView window are sortable and filterable. You can select and copy/paste the filtered results into a CSV file. Meanwhile, the raw, unfiltered results are saved to a CSV file for later use or archival.

Search-AdminAuditLog -StartDate "4/6/2015 12:00:00 AM" -EndDate "4/6/2015 11:20:00 AM"| C:\Scripts\Get-SimpleAuditlogReport.ps1 -agree | Out-GridView –PassThru | Export-Csv -Path c:\temp\auditlog.csv

Here I restrict it to only commands Tommy ran and remove anything that he ran against the discovery mailbox since it is a system mailbox.

Copy/paste the filtered results into a CSV file. Out-GridView has no built-in export or save feature. To save your filtered results, click on an entry, then press Ctrl+A / Ctrl+C to select all and copy the results to your clipboard. Finally, paste into Excel and you’re done.

There you have it. Admin Audit Log Mastery – CHECK! Thanks to Matthew Byrd’s wonderful script you can get the most out of your audit logs. Check it out over at TechNet.

Brandon Everhardt


Released: June 2015 Exchange Cumulative Update and Update Rollups

The Exchange team is announcing today the availability of our latest quarterly updates for Exchange Server 2013 as well as updates for Exchange Server 2010 Service Pack 3 and Exchange Server 2007 Service Pack 3.

Cumulative Update 9 for Exchange Server 2013 and UM Language Packs are now available on the Microsoft Download Center. Cumulative Update 9 contains the latest set of fixes and builds upon Exchange Server 2013 Cumulative Update 8. The release includes fixes for customer reported issues, minor product enhancements and previously released security bulletins. A complete list of customer reported issues resolved can be found in Knowledge Base Article KB3049849. Customers running any previous release of Exchange Server 2013 can move directly to Cumulative Update 9 today. Customers deploying Exchange Server 2013 for the first time may skip previous releases and start their deployment with Cumulative Update 9 directly.

For the latest information and product announcements please read What’s New in Exchange Server 2013, Release Notes and product documentation available on TechNet.

Cumulative Update 9 may include Exchange related updates to the Active Directory schema and Exchange configuration when compared with the version of Exchange 2013 you have currently deployed. Microsoft recommends all customers test the deployment of a cumulative update in their lab environment to determine the proper installation process for your production environment. For information on extending the schema and configuring Active Directory, please review the appropriate TechNet documentation.

Also, to prevent installation issues you should ensure that the Windows PowerShell Script Execution Policy is set to “Unrestricted” on the server being upgraded or installed. To verify the policy settings, run the Get-ExecutionPolicy cmdlet from PowerShell on the machine being upgraded. If the policies are NOT set to Unrestricted you should use the resolution steps in KB981474 to adjust the settings.
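
For example, checking the policy looks like the snippet below. Set-ExecutionPolicy is shown here only as a sketch of one possible fix; KB981474 describes the supported resolution, which may involve Group Policy if the setting is enforced centrally.

```powershell
# Check the effective script execution policy on the server being upgraded
Get-ExecutionPolicy -List

# If the effective policy is not Unrestricted (and is not enforced by GPO),
# it can be set at the machine level:
Set-ExecutionPolicy Unrestricted -Scope LocalMachine
```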

Reminder: Customers in hybrid deployments where Exchange is deployed on-premises and in the cloud, or who are using Exchange Online Archiving (EOA) with their on-premises Exchange deployment are required to deploy the most current (e.g., CU9) or the prior (e.g., CU8) Cumulative Update release.

Also being released today are Exchange Server 2010 Service Pack 3 Update Rollup 10 (KB3049853) and Exchange Server 2007 Service Pack 3 Update Rollup 17 (KB3056710). These releases provide minor improvements and fixes for customer reported issues. Update Rollup 10 is the last scheduled release for Exchange Server 2010. Both Exchange Server 2010 and Exchange Server 2007 are in extended support and will receive security and time zone fixes on-demand on a go-forward basis.

Note: KB articles mentioned may not be fully available at the time this post was published.

The Exchange Team

Exchange 2013 Calculator Updates


Today, we released an updated version of the Exchange 2013 Server Role Requirements Calculator.

In addition to numerous bug fixes, this version includes new functionality: a CPU utilization table, ReplayLagManager support, MaximumPreferredActiveDatabases support, Restore-DatabaseAvailabilityGroup scenario support, and updated sizing guidance. You can view what changes have been made or download the update directly. For details on the new features, read on.

CPU Utilization Table

The Role Requirements tab includes a table that outlines the expected theoretical CPU utilization for various modes:

  • Normal Run Time (where the active copies are distributed according to ActivationPreference=1)
  • Single Server Failure (redistribution of active copies based on a single server failure event)
  • Double Server Failure (redistribution of active copies based on a double server failure event)
  • Site Failure (datacenter activation)
  • Worst Failure Mode (in some cases this value will equal one of the previous scenarios, but it could also be a scenario like Site Failure + 1 server failure; the worst failure mode is what is used to calculate memory and CPU requirements)

Here’s an example:

[Screenshot: CPU utilization table on the Role Requirements tab]

In the above scenario, the worst failure mode is a site failure + 1 additional server failure (since this is a 4 database copy architecture).

ReplayLagManager Support

ReplayLagManager is a new feature in Exchange Server 2013 that automatically plays down the lagged database copy when availability is compromised. While it is disabled by default, we recommend it be enabled as part of the Preferred Architecture.
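
Enabling it is a one-liner against the DAG (the DAG name below is hypothetical):

```powershell
# ReplayLagManager is disabled by default; enable it per the Preferred Architecture
Set-DatabaseAvailabilityGroup -Identity DAG1 -ReplayLagManagerEnabled $true

# Verify the setting
Get-DatabaseAvailabilityGroup -Identity DAG1 | Format-List Name,ReplayLagManagerEnabled
```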

Prior to version 7.5, the calculator only supported ReplayLagManager in the scripts created via the Distribution tab (the Role Requirements and Activation Scenarios tabs did not support it). As a result, the calculator did not factor the lagged database copy as a viable activation target for the worst failure mode. Naturally, this is an issue because sizing is based on the number of active copies, and the more copies activated on a server, the greater the impact to CPU and memory requirements.

In a 4-copy 2+2 site resilient design with the fourth copy being lagged, this meant that the calculator sized the environment based on what it considered the worst failure mode – Site Failure (2 HA copies lost, only a single HA copy remaining). Using the CPU table above as an example, calculator versions prior to 7.5 would base the design requirements on 18 active database copies (site failure) instead of 22 active database copies (3 copies lost, lagged copy played down and utilized as the remaining active).

ReplayLagManager is only supported (from the calculator’s perspective) when the design leverages:

  • Multiple Databases / Volume
  • 3+ HA copies

MaximumPreferredActiveDatabases Support

Exchange 2010 introduced the MaximumActiveDatabases parameter, which defines the maximum number of databases that are allowed to be activated on a server by Best Copy Selection (BCS). It is this value that is used in sizing a Mailbox server (and it is defined by the worst failure mode in the calculator).

Exchange 2013 introduced an additional parameter, MaximumPreferredActiveDatabases. This parameter specifies a preferred maximum number of databases that the Mailbox server should have. The value of MaximumPreferredActiveDatabases is only honored during best copy and server selection (phases 1 through 4), database and server switchovers, and when rebalancing the DAG.

With version 7.5 or later, the calculator recommends setting MaximumPreferredActiveDatabases when there are four or more total database copies. Also, the Export DAG List form exposes the MaximumPreferredActiveDatabases setting, and createdag.ps1 sets the value for the parameter.
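
To illustrate how the two parameters work together, the values below are hypothetical (the calculator computes the actual numbers for your design):

```powershell
# MaximumActiveDatabases: hard cap, sized for the worst failure mode
# MaximumPreferredActiveDatabases: soft cap honored by BCS, switchovers,
# and DAG rebalancing under normal operating conditions
Set-MailboxServer -Identity EX01 `
    -MaximumActiveDatabases 24 `
    -MaximumPreferredActiveDatabases 12
```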

Restore-DatabaseAvailabilityGroup Scenario Support

In prior releases, the Distribution tab only supported the concept of Fail WAN, which allowed you to simulate the effects of a WAN failure and model the surviving datacenter’s reaction depending on the location of the Witness server. However, Fail WAN did not attempt to shrink the quorum, so if you attempted to fail an additional server you would end up in this condition:

[Screenshot: Distribution tab after Fail WAN, showing the unrecoverable state when an additional server is failed]

With version 7.5 and later, the calculator adds a new mode: Fail Site. When Fail Site is used, the datacenter switchover steps are performed (the quorum is shrunk, the alternate witness is utilized if required, etc.), thereby allowing you to fail additional servers. This allows you to simulate the worst failure mode identified in the Role Requirements and Activation Scenarios tabs.

[Screenshot: Distribution tab in Fail Site mode]

Note: In order to recover from the Fail Site mode, you must click the Refresh Database Layout button.

Sizing Guidance Recommendations

As Jeff recently discussed in Ask The Perf Guy: How Big Is Too Big?, we are now providing explicit recommendations on the maximum number of processor cores and memory that should be deployed in each Exchange 2013 server. The calculator will now warn you if you attempt a design that exceeds these recommendations.

[Screenshot: calculator warning when a design exceeds the processor core recommendations]

As always, we welcome your feedback.

Ross Smith IV
Principal Program Manager
Office 365 Customer Experience

Ask the Perf Guy: How big is too BIG?


We’ve seen an increasing amount of interest lately in deployment of Exchange 2013 on “large” servers. By large, I mean servers that contain significantly more CPU or memory resources than what the product was designed to utilize. I thought it might be time for a reminder of our scalability recommendations and some of the details behind those recommendations. Note that this guidance is specific to Exchange 2013 – there are many architectural differences in prior releases of the product that will impact scalability guidance.

In a nutshell, we recommend not exceeding the following sizing characteristics for Exchange 2013 servers, whether single-role or multi-role (and you are running multi-role, right?).

Recommended Maximum CPU Core Count: 24

Recommended Maximum Memory: 96 GB

Note: Version 7.5 and later of the Exchange Server 2013 Role Requirements Calculator aligns with this guidance and will flag server configurations that exceed these guidelines.

As we have mentioned in various places like TechNet and our Preferred Architecture, commodity-class 2U servers with 2 processor sockets are our recommended server type for deployment of Exchange 2013. The reason for this is quite simple: we utilize massive quantities of these servers for deployment in Exchange Online, and as a result this is the platform that we architect for and have the best visibility into when evaluating performance and scalability.

You might now be asking the fairly obvious follow up question: what happens if I ignore this recommendation and scale up?

It’s hard, if not impossible, to provide a great answer to this question, because there are so many things that could go wrong. We have certainly seen a number of issues raised through support related to scale-up deployments of Exchange in recent months. An example of this class of issue appears in the “Oversizing” section of Marc Nivens’ recent blog article on troubleshooting high CPU issues in Exchange 2013. Many of the issues we see are in some way related to concurrency and reduced throughput due to excessive contention amongst threads. This essentially means that the server is trying to do so much work (believing that it has the capability to do so given the massive amount of hardware available to it) that it is running into architectural bottlenecks and actually spending a great deal of time dealing with locks and thread scheduling instead of handling transactions associated with Exchange workloads. Because we architect and tune the product for mid-range server hardware as described above, no tuning has been done to get the most out of this larger hardware and avoid this class of issues.

We have also seen some cases in which the patterns of requests being serviced by Exchange, the number of CPU cores, and the amount of physical memory deployed on the server resulted in far more time being spent in the .NET Garbage Collection process than we would expect, given our production observations and tuning of memory allocation patterns within Exchange code. In some of these cases, Microsoft support engineers may determine that the best short-term workaround is to switch one or more Exchange services from the Workstation Garbage Collection mode to Server Garbage Collection mode. This allows the .NET Garbage Collector to manage memory more efficiently but with some significant tradeoffs, like a dramatic increase in physical memory consumption. In general, each individual service that makes up the Exchange server product has been tuned as carefully as possible to be a good consumer of memory resources, and wherever possible, we utilize the Workstation Garbage Collector to avoid a dramatic and typically unnecessary increase in memory consumption. While it’s possible that adjusting a service to use Server GC rather than Workstation GC might temporarily mitigate an issue, it’s not a long-term fix that the product group recommends. When it comes to .NET Garbage Collector settings, our advice is to ensure that you are running with default settings and the only time these settings should be adjusted is with the advice and consent of Microsoft Support. As we make changes to Exchange through our normal servicing rhythm, we may change these defaults to ensure that Exchange continues to perform as efficiently as possible, and as a result, manual overrides could result in a less optimal configuration.

As server and processor technology changes, you can expect that we will make adjustments to our production deployments in Exchange Online to ensure that we are getting the highest performance possible at the lowest cost for the users of our service. As a result, we anticipate updating our scalability guidance based on our experience running Exchange on these updated hardware configurations. We don’t expect these updates to be very frequent, but change to hardware configurations is absolutely a given when running a rapidly growing service.

It’s a fact that many of you have various constraints on the hardware that you can deploy in your datacenters, and often those constraints are driven by a desire to reduce server count, increase server density, etc. Within those constraints, it can be very challenging to design an Exchange implementation that follows our scalability guidance and the Preferred Architecture. Keep in mind that in this case, virtualization may be a feasible option rather than a risky attempt to circumvent scalability guidance and operate extremely large Exchange servers. Virtualization of Exchange is a well understood, fairly common solution to this problem, and while it does add complexity (and therefore some additional cost and risk) to your deployment, it can also allow you to take advantage of large hardware while ensuring that Exchange gets the resources it needs to operate as effectively as possible. If you do decide to virtualize Exchange, remember to follow our sizing guidance within the Exchange virtual machines. Scale out rather than scale up (the virtual core count and memory size should not exceed the guidelines mentioned above) and try to align as closely as possible to the Preferred Architecture.

When evaluating these scalability limits, it’s really most important to remember that Exchange high availability comes from staying as close to the product group’s guidance and Preferred Architecture as possible. We want you to have the very best possible experience with Exchange, and we know that the best way to achieve that is to deploy like we do.

Jeff Mealiffe
Principal PM Manager
Office 365 Customer Experience

Booking Delegation Vs. Classic Delegation


Calendar delegation can be assigned in two different ways, each for a specific scenario. However, any mailbox, usually rooms, can be assigned both and this causes confusion when managing delegation of rooms or resources. What’s the difference and why?

Classic Delegation

Classic delegation has been around forever and used when a manager wants one or more people to manage their calendar. For example, a CEO wants their assistant(s) to manage the CEO’s calendar. In this case, the CEO using Outlook or OWA would assign the delegates to the calendar. The Tenant/Org admin has no interaction; this is all controlled by the end users.

Classic Delegation assignment works by:

  • Adding Editor permission on the calendar to the delegates
  • Granting Send As permissions to the delegates
  • Creating a hidden transport rule which redirects the meetings to the delegates, red box below.

All this is done by the client. The Tenant/Org admin has no involvement.

Outlook example:

[Screenshot: Outlook delegate assignment dialog]

OWA Calendar delegation assignment example:

[Screenshot: OWA calendar delegation assignment]

In both cases, classic delegation is completely controlled by end users assigned from the clients.

Booking or Resource Delegation

This feature is designed to allow the Tenant/Org admin to manage all room and/or resource delegation to specific people to manage, no end user configuration involved. The Tenant/Org admin has total control of who the delegates are for all the rooms and resources.

[Screenshot: booking delegation assignment in the admin portal]

The same calendar permissions and Send As is created but there is no hidden rule for booking delegation, the Resource Booking Agent takes care of redirecting the meetings to the assigned booking delegates.
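
From the shell, booking delegation is typically assigned with Set-CalendarProcessing. A sketch (the room and delegate names are hypothetical):

```powershell
# Route booking requests to named delegates instead of auto-accepting everything
Set-CalendarProcessing -Identity "ConfRoom1" `
    -AutomateProcessing AutoAccept `
    -AllBookInPolicy $false `
    -AllRequestInPolicy $true `
    -ResourceDelegates "Ayla","Brian"
```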

The problem?

The problem happens when a room has been configured for classic delegation, and then the Tenant/Org admin reassigns or modifies delegation from the booking delegation portal. The booking delegation assignment succeeds, but the classic delegation hidden rule still exists; it fires first and continues to redirect meetings to the classic delegates, overriding the booking delegates.

Solution

The easiest solution is to simply log on to the room mailbox that has classic delegation configured and deselect the checkbox that redirects calendaring items to the delegate.
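
If logging on to the room isn’t practical, the hidden redirect rule can also be inspected from the shell. This is a sketch: the room name is hypothetical, the -IncludeHidden switch surfaces system rules in addition to user rules, and the hidden rule’s name varies by client version, so identify it carefully before removing anything.

```powershell
# List rules on the room mailbox, including the hidden delegate-redirect rule
Get-InboxRule -Mailbox "ConfRoom1" -IncludeHidden | Format-Table Name,Enabled,Identity

# Once identified, the hidden delegate rule can be removed:
# Remove-InboxRule -Mailbox "ConfRoom1" -Identity "<rule identity>"
```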

[Screenshot: Outlook delegate dialog showing the meeting-redirect checkbox]

Both Booking delegation and Classic Delegation use the same folder permissions and Send As rights.

Eric Hartmann

Deep Sixing PST Files


A little over two years ago we wrote about removing PSTs from your organization and gave you a tool to assist you with that endeavor: .PST, Time to Walk the Plank. Since then, we’ve updated the tool with some new features and functionality. In this blog, we are going to give you:

  1. More reasons to get rid of PST files.
  2. What to do with the data in the PST files.
  3. How to move the PST data to its new location.

More Reasons to get rid of PST files.

Without further ado, let’s talk a bit about why you want to get rid of PST files.

Corporate Security and Compliance.

  1. PST files are created by users and are unmanaged data. Most organizations have very little insight into where PSTs are created, how the users retain the files, where the files are kept, and exactly how much data is in those files.
  2. PST files circumvent your well-defined data retention policies. Exchange data retention policies do not apply to data residing within PST files, and you cannot set retention tags in PST files.
  3. Apart from the maximum file size of a single PST, currently 50 gigabytes, they are not limited in size or number.
  4. A PST created on one machine or by one user can easily be opened by another user off the network. PSTs can be stored on portable media that can be lost or stolen: thumb drives, USB storage media, DVDs, personal cloud storage. This is a data leakage risk, even if they are password protected.
  5. Depending on your corporate backup strategy, they may not be backed up anywhere. This can result in data loss if the user’s computer has a disk issue.
  6. Data within PST files is not discoverable with built in Exchange Discovery tools. This creates a complex discovery issue for legal departments which can lead to a very expensive discovery process.
  7. More about the importance of Records Management can be found in a very old Exchange Team Blog here.

User Experience

  1. Users with multiple PST files have a disjointed experience when switching machines or using OWA. Because a PST only resides on a single computer, users are limited in how they can interact with that data. In today’s mobile workforce, with smartphones, tablets, laptops and workstations, this can result in data being strewn across devices and the user not having the data they need when they need it.
  2. Outlook Rules that work with a PST file will only work on the computer where that PST resides and Outlook must be running for the rule to fire properly.
  3. Disk Space. PSTs can be up to 50 GBs and users can create multiple different PSTs which can impact disk space on the workstation or device.

Great, you’ve talked me into it, HOW do I get rid of PST files?

Glad you asked, but before we get started you have to make some decisions.

Where do you want to put the data?

Once you decide where you want to put the data, we can work on methods to get it there.

What to do with the data in your PST files.

Now that you have decided it is time for PSTs to Walk the Plank; let’s decide where to put all that data. In general, you have 4 options.

First, you have to decide WHERE you want to store this data:

  1. User Mailbox
  2. Archive Mailbox Locally
  3. Archive Mailbox in the Cloud
  4. Delete, Delete, Delete

Option 1: Keep it simple. Given the mailbox size capabilities of Exchange 2013, you could pull all that mail back into the users’ mailboxes.

Advantages:

  • Easy. Pulling all that PST data back into the mailbox is very easy using the PST capture tool—as long as you have the space. Also keeping it in a single mailbox reduces the management complexity as each user only has a single mailbox associated with their account.
  • Discoverability of data. Mailboxes are searchable via native Exchange Discovery Tools.
  • Manageability of data. Mailbox data is subject to data retention policies.
  • No separation of data. Exchange 2013 and later is designed to handle large mailboxes (100+ GB in size). From a client perspective, all clients can access the data and you can control the OST size with the Outlook 2013 Sync Slider.

Disadvantages:

  • Cost. Increasing the mailbox sizes for all users and keeping multiple copies of that data on server storage can get quite costly if you are not deploying commodity hardware as recommended in the Preferred Architecture.
  • Overhead: Database and log file disk space management. During the ingestion phase, the database and log files will grow quickly if the process is not managed.
  • Large OST footprint with legacy clients. If you are deploying legacy Outlook versions (2010 or prior) or using the Outlook for Mac client, you cannot control the size of the cached mailbox.
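
Besides the PST Capture tool discussed later, PST data can also be pulled into a mailbox with the native New-MailboxImportRequest cmdlet. A sketch (requires the Mailbox Import Export role; the mailbox name and UNC path are hypothetical):

```powershell
# Import a PST from a network share into the user's primary mailbox.
# Add -IsArchive to target the archive mailbox instead (Option 2).
New-MailboxImportRequest -Mailbox "kim" -FilePath "\\FileServer\PSTShare\kim.pst"

# Check progress of the import
Get-MailboxImportRequest -Mailbox "kim" | Get-MailboxImportRequestStatistics
```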

Option 2: Keep it Separate. With Exchange 2013 you can create an archive mailbox and import all the PST data into it.

Advantages:

  • Discoverability of data. Archive Mailboxes are searchable via the native Exchange Discovery tools.
  • Archive Mailbox Data is Manageable. Aging and retention policies apply to both the Mailbox and Archive Mailbox and not to PSTs.
  • Security. The data is secure on your Exchange servers, not on users’ local drives.
  • Safety. The data’s resiliency is handled by continuous replication.
  • Access. The Archive Mailbox is accessible by the user on most machines where they have mailbox access, this includes OWA (access varies by client. Outlook 2007 SP2 with February 2011 cumulative update and above for Outlook for PC).

Disadvantages:

  • Storage costs. You are still importing all of that data into your Enterprise Mail environment. This storage includes, disk space for the data, copies of the data, and backups of the data.

Option 3: Keep it in Office 365. With Exchange 2013 and Office 365, you can put the archive in the cloud—even if the primary mailbox is still On-Premises.

Advantages:

  • Discoverability of data. Archive Mailboxes are searchable via the native Exchange Discovery tools.
  • Reduce Storage costs. With an Enterprise CAL Suite, Archive Mailboxes are included at no extra cost. Enterprise CAL Suite Details. Think about it, Unlimited Archive storage space for your users.
  • Archive Mailbox Data is Manageable. You can apply aging and retention policies.
  • Security. The data is secure on Office 365 servers, not on users’ local drives.
  • Safety. The data is redundant and safe in the Office 365 cloud.
  • Access. The Archive Mailbox is accessible by the user on most machines where they have mailbox access, this includes OWA (access varies by client. Outlook 2007 SP2 with February 2011 cumulative update and above for Outlook for PC).
  • Reduced Storage Hardware. Exchange Online Archive uses Office Online storage and negates the need to build additional storage for the ingested PSTs.
  • Reduced Management costs. Once the Online Archives are set up, the back end management of the cloud storage is done by Microsoft.

Disadvantages:

  • Initial Setup. Yes, there is a bit more setup involved in going to the cloud, but that effort is going to be considerably less than architecting and adding the storage for all of your PST files.

Option 4: Delete them all. Yes, this is actually an option.

Advantages:

  • Really easy to find and delete PST files.

Disadvantages:

  • Lost data. All those PST files were kept for a reason. Some of that data could be business critical.
  • Angry users. Do I need to explain this point? Torches and Pitchforks anyone?

Those are your main options for addressing PST data. In Part 3, we’ll discuss some methods to get the data to the desired location.

How to move the PST data to its new location

This time, I’ll talk about a strategy.

  1. Announce the Policy.
  2. Lock the PSTs
  3. Move the data and remove PSTs from workstations.

Step 1: Announce the Policy

This is the MOST IMPORTANT STEP.

Once you have determined your data retention and PST storage plan, you need to announce it. This message should come from the policy makers (Legal, Business, etc.), NOT the IT department. Pointing to the User Experience issues from Part 1 of this series will go a long way toward easing user acceptance and adoption.

Step 2: Lock the PST files

No sense in getting rid of them if the users can keep putting them back. You can use the following registry keys / Outlook policies to control the behavior before, during, and after you move everything to its final location.

  1. DisablePST – Prevents users from adding PSTs to Outlook profiles.
  2. PSTDisableGrow – Prevents users from adding content to PST files.
  3. DisableCrossAccountCopy – Prevents users from copying data into or out of PST files.
  4. DisableCopyToFileSystem – Prevents users from copying mail to a File System.

Further details on the options to keep users from adding data to PST files -  http://technet.microsoft.com/en-us/library/ff800883.aspx.
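
For illustration, the first two values can be set like this for Outlook 2013. The 15.0 key path is an assumption (it varies by Outlook version), and in production these are normally deployed via Group Policy as the article above describes rather than set by hand:

```powershell
# Block new PSTs and prevent growth of existing ones (Outlook 2013 = 15.0)
$olk = "HKCU:\Software\Policies\Microsoft\Office\15.0\Outlook"
New-Item -Path "$olk\PST" -Force | Out-Null

# DisablePST lives directly under the Outlook key
New-ItemProperty -Path $olk -Name DisablePST -Value 1 -PropertyType DWord -Force

# PSTDisableGrow lives under the PST subkey
New-ItemProperty -Path "$olk\PST" -Name PSTDisableGrow -Value 1 -PropertyType DWord -Force
```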

Step 3: Moving the current PST file data. Three options.

  1. Automate the Move with PST Capture.
  2. Allow your users to move their data.
  3. Upload or ship your PST files to Microsoft and have us import it for you.

Option 1: Automate the Move with the PST Capture Tool.

The PST Capture tool will discover all PST files in your organization. It will gather them into a consolidated location. It will import them to the location you desire, Mailbox, Archive Mailbox, Cloud Archive Mailbox.

Option 2: Allow your users to move their data into their Mailbox or Archive Mailbox.

Wait? What? You just told me about that amazing automated tool, why would I make my users manually import their data?

The tool is great, but it does have some limitations that may not work for all customers.

  1. All or nothing. The tool imports all mail from a PST with no filters for content or age.
  2. Agent on every desktop. The tool requires an agent install on every desktop.
  3. Outlook must be shut down for PST Capture to finalize the PST move.

While this option works, it creates a significant amount of administrative overhead. You have to manage the process:

  1. The messaging to the users about how and where to move the data.
  2. The issues around database and log management if they are uploading On-Premises
  3. The timelines for each of the settings for locking the PST.
  4. Following up with the users.
  5. Following up with the users.
  6. Following up with the users.

In short, yes, this can work, but will drag out the process significantly.

Option 3: Have Microsoft put your PSTs in your Office 365 Mailbox or Archive Mailboxes for you.

Yes, you heard that right, Microsoft will put them in the cloud for you. This is a great option if you have a large amount of data to upload. You can directly upload the PST files via Azure AZCopy Tool OR ship the disks to us. Typically, we recommend the disk shipping service for data over 10TB.

REQUIREMENTS

  1. The PST files. For either drive shipping or network upload, you need to collect them so that they can be copied into the hard drives or uploaded to the cloud storage destination.
  2. Office 365 tenant with active users and mailboxes for all of the users who will have data imported. This option is currently only available if the mailbox is already in Office 365.
  3. PST Mapping File.
  4. A User Account with Mailbox Import Export admin role.
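
The role requirement in item 4 can be granted with a single cmdlet (the account name is hypothetical):

```powershell
# Grant the account performing the import the Mailbox Import Export role
New-ManagementRoleAssignment -Role "Mailbox Import Export" -User "admin@contoso.com"
```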

ADDITIONAL REQUIREMENTS FOR THE SHIPPING OPTION ONLY

  1. Hard Drive: Only 3.5 inch SATA II/III hard drives are supported for use with the PST Import service. Hard drives larger than 4TB are not supported. For import jobs, only the first data volume on the drive will be processed. The data volume must be formatted with NTFS. You can attach a SATA II/III disk externally to most computers using a SATA II/III USB Adapter.
  2. BitLocker encryption: All data stored on hard drives must be encrypted using BitLocker with encryption keys protected with numerical passwords. The Office 365 drive preparation tool will help with the encryption. This can be found in the Office 365 Admin Center under the Import tab.
  3. A carrier account for shipping if drive shipping is your preferred method.

Details of the Office 365 Import service are here:

Other relevant articles:

Mike Ferencak
Senior Premier Field Engineer

Announcing Exchange Server 2016 Preview!


We’re excited to announce that Exchange Server 2016 Preview is now available for download. At Ignite, we introduced Exchange Server 2016 and demonstrated some of its capabilities. Now you can install the bits yourself and get hands-on experience with the newest member of the Exchange family. We’re eager to hear your feedback as we progress toward a final release later this year.

This version of Exchange is special because it was born in the cloud. From the depths of the mailbox store to the most visible parts of the Outlook web UI, the bits that make up Exchange 2016 are already in use across millions of mailboxes in Office 365. For the past several months we’ve been working to package up these capabilities and deliver them on-premises. This preview milestone is an important step in that process, and we’re excited to include the worldwide Exchange community in the journey.

Let’s begin by joining Greg Taylor and Jeremy Chapman for an episode of Office Mechanics that takes a closer look at what’s new in Exchange 2016, with a focus on IT-related features.

 

Here’s a sampling of some key improvements that you can explore as you try out this Preview release. All of these enhancements are driven by our experience running Exchange at scale in a highly available way in Office 365. We believe it is vital to bring innovation from our datacenter to yours.

Simplified architecture

The architecture of Exchange 2016 is an evolution of what was delivered in Exchange 2013, reflecting the best practices of the Exchange Preferred Architecture, and mirroring the way we deploy Exchange in Office 365. The Client Access and Mailbox server roles have been combined, providing a standard building block for building your Exchange environment. Coexistence with Exchange 2013 is simplified, and namespace planning is easier.

Improved reliability

Keeping email up and running is a high-visibility responsibility for IT, so we’ve made investments that help you run Exchange with greater reliability and less effort. Based on Office 365 learnings, we’ve already shipped hundreds of reliability and performance fixes and enhancements to Exchange 2013 customers via Cumulative Updates. Exchange 2016 includes all of those enhancements, of course, but it goes further.

Failovers in Exchange 2016 are 33 percent faster than Exchange Server 2013 due to the ability to read from the passive copy of the database. We’ve turned on Replay Lag Manager by default, which automatically plays down the lagged database copy when database availability is compromised.

We’re building on previous investments in automated repair, adding database divergence detection to help proactively detect instances of database corruption so you can remediate them well before anyone notices a hiccup. To make operation of Exchange simpler, we introduced Get-MailboxServerRedundancy, a new PowerShell cmdlet that helps you prioritize hardware repairs and makes upgrades easier.

New Outlook web experience

As part of our continuing effort to provide users with a first class web experience, we’ve made significant updates to Outlook Web App, which will be known as “Outlook on the web” going forward. New features include: Sweep, Pin, Undo, inline reply, ability to propose new time for meeting invites, a new single-line inbox view, improved HTML rendering, better formatting controls, ability to paste inline images, new themes, and emojis, to name a few. We’ve also made numerous performance improvements and enhanced the mobile browse experience on phones and tablets.

[Screenshot: the new Outlook on the web interface]

Greater extensibility

The Add-In model for Outlook and Outlook on the web, which allows developers to build features right into the user's Outlook experience, continues to get more and more robust. Add-ins can now integrate with UI components in new ways: as highlighted text in the body of a message or meeting, in the right-hand task pane when composing or reading a message or meeting, and as a button or a dropdown option in the Outlook ribbon. Built-in Add-Ins such as My Templates get a user interface makeover. We’ve also introduced new ways of rolling out apps to users, including side-loading of apps with a user-to-user sharing model, and made it possible for users to install apps directly from the Office store or the Outlook ribbon. Additionally, we have added richer JavaScript APIs for attachment handling, text selection, and much more.

Note: Exchange Server 2016 does not support connectivity via the MAPI/CDO library. Third-party products (and custom in-house developed solutions) need to move to Exchange Web Services (EWS) or EAS.

Faster and more intuitive search

As the quantity of email in people’s inboxes continues to grow, it’s essential for them to search through all that email in faster and easier ways. By studying real-world data about how people search and analyzing the speed at which results are returned, we’ve implemented changes to the search architecture and user interface of Office 365, which are now coming on-premises.

The overall speed of server side search is significantly improved in Exchange 2016. But more importantly, the Outlook client now fully benefits from the power of server-side search. When a cached mode Outlook 2016 client is connected to Exchange, it performs search queries using the speed and robust index of the server, delivering faster and more complete results than desktop search.

We’ve also implemented a new, more intuitive search UI in Outlook 2016 and Outlook on the web. As you type, intuitive search suggestions appear, based on people you communicate with, your mailbox content and your query history.

In Outlook on the web, search refiners appear next to the search result set, helping users quickly home in on exactly what they are looking for within results. And with calendar search, you can now search for events in your own calendar and in other people’s calendars.


Enhanced Data Loss Prevention (DLP)

Exchange 2013 included built-in DLP capabilities that help protect sensitive information from falling into the wrong hands, and these capabilities are being extended in Exchange 2016. We are adding 30 new sensitive information types to Exchange, including data types common in South America, Asia, and Europe. We are also updating several existing sensitive data types for improved accuracy.

In addition to enhancing these built-in capabilities, we now enable you to configure DLP and transport rules to trigger when content has been classified by a third-party classification system. You can also configure custom email notifications that are sent to recipients when messages sent to them are impacted by your rules.

Faster and more scalable eDiscovery

We’ve made eDiscovery search faster and more reliable by overhauling the search architecture to make it asynchronous and distributing the work across multiple servers with better fault tolerance. This means that we can return results more reliably and faster. Search scalability through the UI is also improved, and an unlimited number of mailboxes can be searched via cmdlet. You also asked for the ability to perform eDiscovery searches on public folder content and to place the data in public folders on hold to enable long-term archiving, so we’ve added those capabilities in this release.

Auto-expanding archives

To accommodate users who store extremely large amounts of data, Exchange 2016 now automatically provisions auxiliary archive mailboxes when the size of a user’s archive mailbox reaches 100 GB. Thereafter, additional auxiliary archives are automatically provisioned in 50 GB increments. This collection of archive mailboxes appears as a single archive to the user as well as to administrators, accommodating rapid growth of archive data from PST file imports or other intensive use.

Hybrid improvements

Hybrid capabilities allow you to extend your Exchange deployment to the cloud, for example to enable a smooth transition or accommodate mergers and acquisitions. We’re making the hybrid configuration wizard cloud-based, which makes it easier for us to keep it up to date with changes in Office 365.

Hybrid scenarios also enable you to leave all user mailboxes on-premises, while benefitting from cloud services that enhance your deployment – services like Exchange Online Protection; Exchange Online Archiving; Azure Rights Management; Office 365 Message Encryption, and cloud-based Data Loss Prevention. We recently added the Advanced Threat Protection security services to this list, and Equivio analytics for eDiscovery is next up in the queue.

More to come

That’s a quick look at some of the improvements that are part of Exchange Server 2016 Preview. Between preview and final release we’ll add additional features, such as updates to auditing architecture and audit log search. After SharePoint Server 2016 and the Office Web App Server ship their beta versions, you’ll also be able to try out new document collaboration features that help people work with attachments in smarter ways.

How to get started

There is still much to do between now and launch, but we’re excited to put this Preview in your hands. Remember that the Preview can only be used in non-production deployments, unless you are a member of our Technology Adoption Program (TAP). The Preview supports co-existence with Exchange Server 2010 SP3 RU10 and 2013 CU9, for non-production testing. For complete details about the Preview, check out the initial product documentation on the TechNet Exchange Server 2016 library. We’re excited to hear from you as you try out this release!

The Exchange Team

Exchange TLS & SSL Best Practices


Whether you are running Exchange on-premises, in the cloud, or somewhere in between, we know that security is a top priority. Microsoft is committed to giving you the information needed to make informed decisions on how to properly secure your environment.

It has been suggested by some external parties that customers need to disable TLS 1.0 support. One piece of guidance we are aware of suggests taking steps to prepare to disable TLS 1.0 in summer of 2016. Another piece of guidance suggests that TLS 1.0 should not be used with internal-only applications (we do not believe that Exchange is typically used in this manner, as it connects to the outside world via SMTP). While we believe the intentions of both proposals are good and will promote adoption of TLS 1.1 & 1.2, at this time, we do not yet recommend disabling TLS 1.0 on your Exchange Server(s).

Additionally, while TLS 1.1 & 1.2 are superior to TLS 1.0, the real world risks may be somewhat overstated at this point due to mitigations that have been taken across the industry. Of course, security is rarely a binary decision: disabling TLS 1.0 doesn’t suddenly turn something insecure into something secure. That said, we will continue to work towards the goal of making TLS 1.1 & 1.2 work fully with Exchange and a broad array of clients.

More importantly, many customers may not have taken initial steps towards following current best practices. We believe that the first step towards a more secure environment is to have a TLS organizational awareness. While disabling TLS 1.0 on Exchange is not advised at this time, there are definite steps which can be taken today. TLS 1.0 is not widely viewed as insecure when SSL 3.0 is disabled, machines are properly updated, and proper ciphers are used. The current recommendations, which will continue evolving, are as follows:

  • Deploy supported operating systems, clients, browsers, and Exchange versions
  • Test everything by disabling SSL 3.0 on Internet Explorer
  • Disable support for SSL 3.0 on the client
  • Disable support for SSL 3.0 on the server
  • Prioritize TLS 1.2 ciphers, and AES/3DES above others
  • Strongly consider disabling RC4 ciphers
  • Do NOT use MD5/MD2 certificate hashing anywhere in the chain
  • Use RSA-2048 when creating new certificate keys
  • When renewing or creating new requests, request SHA 256-bit or better
  • Know what your version of Exchange supports
  • Use tools to test and verify
  • Do NOT get confused by explicit TLS vs. implicit TLS
  • (For now) Wait to disable TLS 1.0 on the Exchange server

Let’s get started down the list!

Deploy supported operating systems, clients, browsers, and Exchange versions

Perhaps it goes without saying, but the first step to securing any environment is to make sure that all servers, devices, clients, applications, etc. are updated. Most issues support sees after these recommendations are followed are easily fixed with updates already available from the vendor of the incompatible device (printers, firewalls, load balancers) or software (mailers, etc.).

For Exchange, this means test & apply your Windows & Exchange updates regularly. Two reasons for this – first, an environment is only as secure as the weakest link; second, older software typically won’t let you take advantage of the latest TLS versions and ciphers. Make sure firewalls, old Linux MTAs, load balancers, and mass mailer software are all updated. Make sure the multifunction printers have the latest firmware.

Test everything by disabling SSL 3.0 on Internet Explorer

Disabling SSL 3.0 in the browser is a good first step, because it ensures that all your users remain safe no matter where they browse. It also makes it easy to test whether websites and applications will continue to work. A small portion of the Internet still relies on SSL 3.0, but it is overdue for retirement. To test your environment with Internet Explorer, follow KB3009008.


Disable support for SSL 3.0 on the client

After testing, you may also consider disabling it at the SCHANNEL layer for all clients. While you are viewing these settings, make sure that your clients have TLS 1.1 & 1.2 enabled. In most cases, the most recent version supported by both the client & server will be used. This is a good way to start moving towards a more secure environment. All supported versions of Windows have TLS 1.1 & 1.2 capabilities, but the older ones may not have them enabled by default.

Note that registry changes under SCHANNEL are only good for applications that use the SCHANNEL API. Some applications could utilize 3rd party or open source security APIs (like OpenSSL) which may not look at these registry keys. Also, note that changes do not take effect until reboot.
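As an illustration, the client-side SCHANNEL values can be scripted as follows. This is a sketch only: the key paths are the documented SCHANNEL locations, but test in a lab first, and remember that a reboot is required for the changes to take effect.

```powershell
# SCHANNEL protocol settings live under this well-known key
$base = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols'

# Disable SSL 3.0 for the client role
New-Item -Path "$base\SSL 3.0\Client" -Force | Out-Null
Set-ItemProperty -Path "$base\SSL 3.0\Client" -Name Enabled -Value 0 -Type DWord

# Explicitly enable TLS 1.1 and 1.2 for the client role
# (older Windows versions support them but may ship with them off)
foreach ($ver in 'TLS 1.1', 'TLS 1.2') {
    New-Item -Path "$base\$ver\Client" -Force | Out-Null
    Set-ItemProperty -Path "$base\$ver\Client" -Name Enabled -Value 1 -Type DWord
    Set-ItemProperty -Path "$base\$ver\Client" -Name DisabledByDefault -Value 0 -Type DWord
}
```

Again, these values only affect applications that call the SCHANNEL API, and only after a reboot.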

Disable support for SSL 3.0 on the server

The next recommendation is to disable SSL 3.0 on all servers, Exchange included. Do this by following all recommendations in the original security bulletin. Since servers can act as both clients and servers, it is recommended to follow all applicable steps. As before, while you are viewing these settings, make sure that your servers have TLS 1.1 & 1.2 enabled.


Note: Any of these registry changes require a reboot to take effect!
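For illustration, the server-side values mirror the client-side ones under the same SCHANNEL key, using the Server subkeys. Treat this as a sketch, lab-test it, and reboot to apply:

```powershell
$base = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols'

# Disable SSL 3.0 for the server role
New-Item -Path "$base\SSL 3.0\Server" -Force | Out-Null
Set-ItemProperty -Path "$base\SSL 3.0\Server" -Name Enabled -Value 0 -Type DWord

# Confirm TLS 1.1 and 1.2 are enabled for the server role
foreach ($ver in 'TLS 1.1', 'TLS 1.2') {
    New-Item -Path "$base\$ver\Server" -Force | Out-Null
    Set-ItemProperty -Path "$base\$ver\Server" -Name Enabled -Value 1 -Type DWord
    Set-ItemProperty -Path "$base\$ver\Server" -Name DisabledByDefault -Value 0 -Type DWord
}
```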

You can do this with confidence because TLS 1.0 will be the minimum which you support. Exchange and Windows have both supported TLS 1.0 for over a decade. TLS 1.0 itself is not considered vulnerable when SSL 3.0 is disabled on clients and servers. In fact, most Exchange sessions have already been using TLS 1.0 or later for years. You are simply disabling the ability for the session to be downgraded to SSL 3.0. Disabling SSL 3.0 is typically not very impactful except for clients and devices older than roughly 10 years.

These recommendations should be carried out in your organization promptly if they haven't been already. That said, the POODLE vulnerability does require someone to intercept the traffic and sit between the client and server during the initial session negotiation. While this is not especially difficult to accomplish, it is also not trivial. It is a much more severe problem for users who travel and for mobile devices that use hotspots. Since many customers support remote access to email, this is something for Exchange administrators to worry about. And since some mobile device vendors have not released ways to disable SSL 3.0, you can at least keep your Exchange resources safe by disabling SSL 3.0 on the server side.

In addition, enabling support for TLS 1.1 and 1.2 is highly recommended. But leaving TLS 1.0 enabled is a good thing for now. Clients and applications should always prefer the most secure option, provided that Windows, the application, and the client all support it.

Note: If you terminate SSL at load balancers, you’ll want to disable SSL 3.0 there as well (and perform subsequent steps there in addition). Check with your vendor to get their guidance. Also, be sure to check all Exchange servers which may be sharing a single VIP or DNS record.

Office 365 completed these changes, and you will find that SSL 3.0 is not possible for any protocol.

Prioritize TLS 1.2 ciphers, and AES/3DES above others

The next step we recommend is based on a step we took in Office 365 to prioritize the latest ciphers which are considered much more resilient to brute force attack. The thing with ciphers is that it isn’t just about enabling the most secure one and disabling the rest. You want to offer several choices for clients to allow maximum compatibility. You typically want to disable the ones which are the least secure, but leave others to provide choice. The negotiation of a particular cipher depends on:

  1. The client passes an ordered list of ciphers which it supports
  2. The server replies with the best cipher which it has selected (server gets final say)

Changing the order on the server can minimize the use of a less secure cipher, but you may want to go further and disable it completely. Cipher changes are made through this registry key, explained here.
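As an illustration, cipher suite order can be set through that policy key from PowerShell. The suites listed here are examples only, not a recommendation; build your own ordered list based on the clients you must support, and reboot to apply:

```powershell
# Policy key that controls cipher suite order (read at boot)
$key = 'HKLM:\SOFTWARE\Policies\Microsoft\Cryptography\Configuration\SSL\00010002'

# Example ordering only: prefer TLS 1.2 AES suites, then append
# whatever additional suites your clients require
$order = @(
    'TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384',
    'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256',
    'TLS_RSA_WITH_AES_256_CBC_SHA256',
    'TLS_RSA_WITH_AES_128_CBC_SHA256'
) -join ','

New-Item -Path $key -Force | Out-Null
Set-ItemProperty -Path $key -Name Functions -Value $order
```

The `Functions` value is a single comma-separated string; suites omitted from the list are effectively disabled for SCHANNEL clients and servers, so review it carefully before deploying.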


Strongly consider disabling RC4 ciphers

Of course, there is a risk that some clients will stop working if you disable too many ciphers. That said, Microsoft has been recommending disabling the RC4 suite of ciphers as a good best practice; RC4 is considered a weak cipher. Disabling RC4 should be done with some care, as it can introduce incompatibilities with older servers and clients, though problems should be minimal because supported versions of Windows have offered 3DES and AES alternatives for years. The rollout of this change in Office 365 is in progress and should be completed shortly.
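Here is a hedged sketch of disabling the RC4 ciphers. Note that the RC4 key names contain a forward slash, which the PowerShell registry provider treats as a path separator, so the .NET registry API is used directly. As with the other SCHANNEL changes, lab-test first and reboot to apply:

```powershell
# Disable the RC4 cipher suite variants under SCHANNEL\Ciphers
foreach ($cipher in 'RC4 128/128', 'RC4 56/128', 'RC4 40/128') {
    # Key names contain '/', so bypass the registry provider
    $key = [Microsoft.Win32.Registry]::LocalMachine.CreateSubKey(
        "SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Ciphers\$cipher")
    $key.SetValue('Enabled', 0, [Microsoft.Win32.RegistryValueKind]::DWord)
    $key.Close()
}
```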

Do NOT use MD5/MD2 certificate hashing anywhere in the chain

Ciphers depend on the certificate chain being used - you can introduce problems when connecting to a host which has an insecure signature algorithm used in their chain. For example, we have seen that Office 365 SMTP transport is no longer able to connect to hosts with MD5 and MD2 hashing because they do not support modern ciphers. This applies to the certificate and any certificates in the chain. We see this with SMTP because Exchange is acting as a client, and because there are many older SMTP systems and firewalls still out there.

Use RSA-2048 when creating new certificate keys

There are some things to watch out for when you renew or reissue certificates. First, when creating your requests, use 2048-bit RSA keys. Anything less is not considered secure anymore.

When renewing or creating new requests, request SHA 256-bit or better

Second, when you renew, you should consider moving the signature algorithm from SHA1 to SHA2 if you haven’t already done so. This isn’t considered something that you need to worry about until renewal time, unless your certificate happens to be good for another couple of years – in which case, go ahead and take care of it now.
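When renewal time comes, a request with a 2048-bit key can be generated from the Exchange Management Shell. This is a sketch; the subject and domain names below are placeholders, and your certificate authority is what applies the SHA-256 signature when it signs the request:

```powershell
# Generate a 2048-bit certificate request; ask the CA to sign with SHA-256
$req = New-ExchangeCertificate -GenerateRequest `
    -SubjectName 'CN=mail.contoso.com' `
    -DomainName 'mail.contoso.com','autodiscover.contoso.com' `
    -KeySize 2048 -PrivateKeyExportable $true

# Save the request text to hand to the certificate authority
Set-Content -Path 'C:\Temp\mail_contoso.req' -Value $req
```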

You can check your Exchange certificates with a browser (or in the Certificate Manager MMC).


This example certificate was generated with Exchange 2013 on Windows 2012 R2. It has an RSA 2048-bit key and has an RSA SHA256 (SHA-2) signature algorithm.

Know what your version of Exchange supports

Applications sometimes need to be re-compiled and tested to take advantage of new protocol versions, so every part of Exchange and every Windows-based client needs to be examined and tested thoroughly. Currently, for Exchange Server, we are aware of the following limitations:

  • SMTP – a key piece of Exchange server infrastructure – support for TLS 1.1 and 1.2 was added in Exchange Server 2013 CU8 and Exchange Server 2010 SP3 RU9. This means that if you want to add support for the latest ciphers and TLS versions, you may need to apply an update.

IMPORTANT: SMTP is the main protocol used when communicating outside of your organization, which is a key purpose of email. If you disable TLS 1.0, SMTP would no longer be able to use opportunistic TLS with any external party that doesn’t support TLS 1.1 or 1.2. Emails would then be sent and received in the clear, which is certainly significantly less secure than TLS 1.0. That said, we have enabled new logging in the Exchange SMTP protocol logs to allow you to audit the impact of future changes on SMTP.

Additional Note: SMTP is notably a protocol where Exchange acts as both a client and a server. Some older server implementations have been observed to incorrectly implement version negotiation.  In these cases, the remote servers terminate the connection when Exchange (acting as a client) offers a version newer than TLS 1.0.  This results in a complete stoppage of email to these systems. Fortunately, these situations are becoming rare as time passes, but this is pointed out because the effects often are more impactful than a mail client which cannot connect.

  • POP/IMAP – not used as frequently in all environments, but if you do use them, be aware that we currently support TLS 1.1 and 1.2 on-premises only in the Exchange Server 2016 Preview. We hope to make this available in a future CU, or you can request it via proper channels so we can prioritize it. Office 365 already has this support.
  • HTTPS (OWA, Outlook, EWS, Remote PS, etc.) – Support for TLS 1.1 and 1.2 is based on the support in IIS itself. Windows 2008 R2 or later supports both TLS 1.1 and 1.2, though a specific version of Windows may have them disabled or enabled by default. There is another important caveat here: the HTTPS proxy between CAS and Mailbox requires TLS 1.0 in current versions of Exchange Server, so disabling TLS 1.0 between CAS and Mailbox causes the proxy to fail. This is also something we have addressed in the Exchange 2016 Preview. We hope to make this available in a future CU, or you can make a request for it via Support. If you have dedicated roles, you can technically disable TLS 1.0 between the client & CAS, but we still are not recommending this. Office 365 already supports TLS 1.1 & 1.2, if the client supports them.
  • Clients – TLS 1.0 is universal, with near 100% support. Though TLS 1.1 and 1.2 are growing more common, many Exchange clients still do not work with anything but TLS 1.0. For example, at this time, we are tracking multiple issues with Outlook running on Windows 8.0 or older. We are hoping to address these issues soon, but with Windows 7 commonly running in most customer environments, this is a really good reason to not disable TLS 1.0 yet. Comprehensive testing of other clients running without TLS 1.0 has not been completed by Microsoft at this time.

Note: Windows Remote Desktop may also have challenges, depending on your version of Windows. For servers which are managed remotely, be sure to test this first.

Use tools to test and verify

There are several tools and websites you can go to for testing your server(s) and clients. It is highly recommended to do so. Some offer a grading/scoring system. Others offer pass/fail. We’re inclined to recommend one with a scoring system, since security is about risks and tradeoffs. Don’t be surprised if one or more of these tools doesn’t fully test for POODLE and just thinks TLS 1.0 is bad. Use your newfound knowledge to read the results for what they are.

We prefer tools that let you check specific things (like cipher order, or individual TLS/SSL versions) in addition to the blanket “vulnerability tests”. There is also one fantastic (non-Microsoft) website called SSLLabs which simulates multiple clients and can warn you of compatibility issues with the clients which it knows about. For example, it shows that disabling TLS 1.0 would likely cause issues with older versions of Android clients.


In addition, you can see how you compare with the rest of the Internet. This is great for HTTPS. Most certificate vendors have test tools available as well, though they have differing coverage of what is tested.
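If you prefer to spot-check from the shell, .NET's SslStream can report what a given endpoint negotiates. This is an illustrative probe (the host name is a placeholder), not a replacement for the fuller test suites above:

```powershell
# Connect, hand-shake, and report the negotiated protocol and cipher
$tcp = New-Object Net.Sockets.TcpClient('mail.contoso.com', 443)
$ssl = New-Object Net.Security.SslStream($tcp.GetStream(), $false, { $true })  # probe only: skip cert validation
$ssl.AuthenticateAsClient('mail.contoso.com')
"Negotiated $($ssl.SslProtocol) with $($ssl.CipherAlgorithm) ($($ssl.CipherStrength)-bit)"
$ssl.Dispose(); $tcp.Close()
```

Because this uses SCHANNEL under the covers, the result also reflects the client-side protocol settings of the machine you run it from.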

Other tools are available which test additional protocols. One such test can be run against IMAP on port 993 (often referred to as the “SSL binding”; see below for an explanation).


In such a test you will typically find that even on port 993, TLS 1.0 is used with AES256.

Do NOT get confused by explicit TLS vs. implicit TLS

In the course of human events, shortcuts are taken. One unfortunate shortcut occurred when TLS 1.0 added optional support for a per-protocol implementation of STARTTLS, also known as “explicit TLS”. Prior to “explicit TLS”, if a server application level protocol wanted to implement SSL/TLS in addition to a non-secure option, it had to take up a separate port on the machine for each. This is “implicit TLS”. See the following chart:

Protocol   IANA port (Explicit TLS)     Protocol   IANA port (Implicit TLS)
E-SMTP     25                           SMTPS      465**
POP3       110                          POPS       995
IMAP4      143                          IMAPS      993
HTTP       80*                          HTTPS      443


* HTTP doesn’t implement explicit TLS, because it is stateless and the overhead would not be worth it.
** Exchange specifically does not support SMTPS (implicit TLS).

The first protocol which implemented this verb was ESMTP. By doing so, SMTP could support clients & servers on the same port, and could also easily implement “opportunistic” TLS/SSL. In fact, Exchange has never supported SMTPS (465), although we do reuse that port by default in Exchange 2013 for one of the three transport roles. For POP and IMAP, Exchange supports both the explicit option and the implicit option.
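To make the distinction concrete, here is a rough sketch of explicit TLS on SMTP port 25: the session starts in the clear and is upgraded in-band via STARTTLS. The host names are placeholders; with implicit TLS (ports 465/993/995) the socket would instead be handed to SslStream immediately, before any protocol traffic.

```powershell
$tcp = New-Object Net.Sockets.TcpClient('mail.contoso.com', 25)
$stream = $tcp.GetStream()
$reader = New-Object IO.StreamReader($stream)
$writer = New-Object IO.StreamWriter($stream)
$writer.NewLine = "`r`n"; $writer.AutoFlush = $true    # SMTP requires CRLF line endings

$reader.ReadLine() | Out-Null                          # 220 banner arrives in the clear
$writer.WriteLine('EHLO client.contoso.com')
while ($reader.ReadLine() -notmatch '^250 ') { }       # read capabilities (includes 250-STARTTLS)
$writer.WriteLine('STARTTLS')
$reader.ReadLine() | Out-Null                          # 220 ready to start TLS

# Only now does the TLS handshake happen, on the same port 25
$ssl = New-Object Net.Security.SslStream($stream)
$ssl.AuthenticateAsClient('mail.contoso.com')
"Session upgraded to $($ssl.SslProtocol)"
```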

What can be confusing is that because STARTTLS didn’t come about until TLS 1.0, some people started confusing explicit TLS with “TLS”, and some mail applications started using the terminology interchangeably. So, disabling ports 995 & 993 does not turn off SSL 3.0 (you are disabling implicit POPS & IMAPS, but not SSL), nor is enabling ports 110 & 143 (explicit TLS) required for TLS 1.x. The terminology is confusing, but the concepts are mostly unrelated. This unfortunate optimization was brought into Exchange as well.


However, tinkering with ports and implicit/explicit should not be necessary as you are NOT disabling SSL 3.0 by doing so. Securing Exchange Server shouldn’t mean changing any of these settings – just the SCHANNEL registry settings discussed above.

(For now) Wait to disable TLS 1.0 on the Exchange server

In summary, as of July 2015, Exchange currently supports TLS 1.0, but can also support TLS 1.1 & 1.2 with the following minimum requirements met:

Protocol             TLS v1.1/1.2 minimum requirements
SMTP                 Exchange 2013 CU8 or Exchange 2010 SP3 RU9
POP/IMAP             Exchange 2016 Preview
HTTP (server)        Windows 2008 R2; MAPI clients must run Windows 8.1 or later
HTTP (proxy to MBX)  Exchange 2016 Preview

As you can see, since Exchange Server 2016 isn’t released yet as an in-market product (it is for lab use only at this time), and since Windows 7 is still the most prevalent Windows version, it is quite impractical to fully disable TLS 1.0. Not only will POP/IMAP break (for lack of TLS 1.1 and 1.2 support), but you cannot disable TLS 1.0 on any Exchange server running the mailbox server role. Most importantly, disabling TLS 1.0 will result in compatibility issues with some common mobile devices, clients, and possibly interrupt some Internet email.

Don’t panic – if you have disabled SSL 3.0 and decided on a cipher order that your organization can agree on, you are likely quite secure, and you are not vulnerable to the POODLE attack. Microsoft is committed to adding full support for TLS 1.1 and 1.2. TLS v1.3 is still in draft, but stay tuned for more on that. In the meantime, don’t panic.


On a test Exchange lab with Exchange 2013 on Windows Server 2012 R2, we were able to achieve a top rating by simply disabling SSL 3.0 and removing RC4 ciphers. This is nearly as good as one can achieve at the time of this posting on released versions of Exchange without impacting common clients.


Additionally, this configuration should be highly compatible with nearly all clients and devices from the past decade or more, while utilizing the latest security with clients which do support it. Of course, security requires a watchful eye as new threats and vulnerabilities are discovered from time to time. As always, stay tuned to Security Bulletins and updates.

Scott Landry
Senior Program Manager, Exchange Supportability


Hybrid deployment best practices


Many Office 365 customers are using our hybrid deployment option since it offers the most flexible migration process, the best coexistence story, and the most seamless onboarding user experience. However, even with all of this flexibility, a few wrong choices in the planning and deployment phase could leave you with a delayed migration, an unsupported configuration, or a poor user experience. This article will help you make the best choices for your hybrid configuration so you can avoid some common mistakes. For more information on Exchange hybrid, go here.

Note: As of this writing, Exchange 2016 is in Preview. It is not meant for production use. You would never install that in your production environments… right?

Ensure your on-premises Exchange Deployment is healthy

Some of our best guidance for configuring hybrid comes from the Exchange Deployment Assistant (EDA); however, the EDA separates the on-premises configuration from the hybrid configuration. Our hybrid guidance makes an unwritten assumption that you have already properly deployed and completed the coexistence process with the current versions of Exchange in your on-premises environment. You should ensure the existing environment is healthy before starting the Exchange hybrid configuration steps.

This means that if the newest version of Exchange in your environment is Exchange 2010, you need to deploy the right amount of 2010 servers to handle the normal connection and mail flow load for all of your on-premises mailboxes. Similarly, if the latest version in your environment is Exchange 2013, you need to deploy enough 2013 servers to handle the load. For more information on on-premises Exchange Server sizing go here.

Note: There is always an exception to the rule. In this case the exception is mail flow. There is a possibility that you may configure hybrid so all mail flows through your on-premises environment even after you move most of your mailboxes to Exchange Online. You may even have some applications that rely on the on-premises Exchange servers for SMTP relay. All of this needs to be accounted for and some extra thought may need to go into your sizing plans for these scenarios. Currently, our toolset for planning and sizing your mail routing environments do not cover these more complex scenarios.

If you think about a typical hybrid deployment, on day one there are essentially no mailboxes in the cloud. Therefore, you most likely have an environment that can handle all of the current on-premises workflow. Then as you move mailboxes to Exchange Online the load on the on-premises servers reduces since much of the client connectivity and mail flow tasks are now handed off to Exchange Online. The minor amount of processing power that is needed on-premises for things like cross premises free busy for an Exchange Online mailbox after it is moved will not come close to the demands of an on-premises mailbox, for example.

Should we have a hybrid specific URL?

We have seen deployments where a decision is made to keep the existing Mail.Contoso.com and Autodiscover.Contoso.com pointing to a bank of Exchange 2010 servers and to have a new hybrid URL, such as hybrid.Contoso.com, pointing to a couple of Exchange 2013 servers. This is an example of an environment that did not introduce Exchange 2013 in a recommended way. Let’s forget about hybrid for a second: when you introduce Exchange 2013 into an environment, you should configure coexistence in a supported way. This means you install enough Exchange 2013 servers to handle the proxy load for all on-premises mailboxes and point the external URLs to the latest version of Exchange in the site. Again, deploy the latest version properly before you enter a hybrid configuration.

Keep Exchange up to date

The Cumulative Update (CU), Rollup, and Service Packs you have running on the on-premises server should also not be overlooked. Under normal circumstances we support you being no more than two updates behind the currently released update for Exchange; however, for hybrid environments, we are stricter and you should not be more than one build behind. If the latest update is Exchange 2013 CU9, then you must have either Exchange 2013 CU9 or CU8 to be considered in a supported state. We are stricter with our hybrid requirements because of how tightly the on-premises and Exchange Online environments will be coupled together. For more information on our available updates please go here.
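A quick way to audit this is from the Exchange Management Shell. This is a sketch; compare the output against Microsoft's published build number list to see which servers are behind:

```powershell
# List every server and its build so stragglers stand out
Get-ExchangeServer | Sort-Object AdminDisplayVersion |
    Format-Table Name, ServerRole, AdminDisplayVersion -AutoSize
```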

Some might ask: “Can I keep just my hybrid server up to date?” The answer: there is no such thing as a “hybrid server.” (More on that in a minute.) What this question is really asking is: “Can I just update the server where I plan to run the Hybrid Configuration Wizard (HCW)?” The answer to that is “No.” As we move through this post, you will see how entering a hybrid world means most of your servers play a part and communicate cross-premises. For you to have a seamless, supported experience, your whole environment needs to be up to date, not just a specific server or two.

If you have a healthy updated on-premises configuration, you will have a proper foundation for introducing a Hybrid configuration into your messaging environment in a supported and optimal way.

There is no such thing as a ‘hybrid server’

We often hear people say, “I am going to deploy a hybrid server,” thinking they will designate three or four specific servers as “hybrid servers.” However, they fail to realize that hybrid is a set of organization-wide configurations, and the server where the HCW is run is just there to initiate those configurations.

To explain this, let’s briefly cover a free/busy scenario. When an on-premises user creates a meeting request and looks up a cloud user’s free/busy information, the request will go to the EWS URL returned from Autodiscover (step 1 below) and that server will facilitate the request by initiating the Availability service to talk to the O365 service (step 2 below). At this point, that could be ANY server in the environment. This means that when you configure hybrid, all 2010 CAS, 2013 Mailbox, and 2016 servers (when this will be supported) in the environment could be facilitating a federated free/busy request. There is no reasonable way to direct outbound federated free/busy requests to a particular set of servers.

[Diagram: From on-premises to EXO]

Let’s look at the reverse scenario and explain what happens when a cloud user looks up an on-premises user’s free/busy information. In this scenario, the EXO server would perform an Autodiscover request to determine the on-premises EWS endpoint (step 1 below) and any server that responds to the Autodiscover.Contoso.com or Mail.Contoso.com endpoints would be responsible for facilitating the Autodiscover or free/busy request (step 2 below). The thing to keep in mind is that these are the same endpoints all of the on-premises users rely on for things like client connectivity, so you would not want to limit them to one or two servers in a larger environment. In short, you should deploy Exchange properly into your environment, then do your hybrid configuration.

[Image: From EXO to on-premises]

Part of this confusion may stem from the fact that the HCW asks you to select CAS and Mailbox servers. We ask for the CAS servers so that the receive connectors on those servers can be configured, and we ask for the Mailbox servers to ensure that the send connectors are properly configured. Selecting those servers is not selecting your “hybrid servers”; it is explicitly for mail flow configuration. We do not have any concrete recommendation about which servers, or how many of them, should be added for mail flow purposes. There are just too many factors with mail flow, such as seat count, migration schedules, and geography.

Be sure to choose the correct version of Exchange

The Hybrid Configuration Wizard can be run from Exchange 2010, 2013, and soon 2016 so the question is often asked: “What version should I run the HCW from?” Let’s go through some of the decisions that will have to be made to help answer this.

Do you have Exchange 2003?

If Exchange 2003 is in your environment, then your only option for going hybrid will be to use Exchange 2010. This means that you would need to ensure that you have properly deployed and sized the Exchange 2010 environment, and then you can run the hybrid configuration process.

Is Exchange 2007 the oldest version you have deployed?

If you have Exchange 2007, and you do not already have Exchange 2010 deployed then we would recommend you properly deploy Exchange 2013, then deploy hybrid. This will give you the largest feature set, and since you have to introduce a newer version of Exchange, you should deploy a version that is supported under mainstream support.

Have you deployed Exchange 2010?

If this is the case, you need to ask yourself if Exchange 2010 fits your needs or if you need the features of Exchange 2013. Deploying hybrid with Exchange 2013 allows for features like cross-premises e-discovery, enhanced secure mail, or OAuth federation support. If these features are not important to you, then you can stick with Exchange 2010 on-premises and deploy hybrid.

In the event you want to upgrade your on-premises environment to Exchange 2013, you would need to deploy Exchange 2013 following our best practices guidance and deploy enough Exchange 2013 servers to handle all of the on-premises traffic. This includes going through the proper steps to size and deploy Exchange 2013 for your on-premises environment and following the guidance for properly setting up the hybrid configuration. Customers often use the Exchange Deployment Assistant twice for this: first to introduce Exchange 2013 into the Exchange 2010 environment, and a second time to configure hybrid.

Is your newest deployed version Exchange 2013? Are you planning for Multi-Org hybrid?

Aside from the OAuth configuration previously mentioned, multi-org hybrid requires at least one Exchange 2013 (or later, when supported) server on-premises in every forest that will be entering into the multi-forest hybrid configuration. The HCW for Exchange 2010 does not have the proper logic to handle the naming conventions used for connectors and organization relationships. For more information, see the multi-forest hybrid documentation.

A simpler story is ahead for Exchange 2016

When we release Exchange 2016, the deployment guidance for coexistence with Exchange 2013 will be a lot simpler than in the past. You will no longer have to move your URLs to the newest version of Exchange; instead, you will be able to add one or two Exchange 2016 servers to the pool of servers that respond to the Autodiscover.contoso.com and Mail.Contoso.com endpoints. This means you will not have to stand up enough servers running the latest version to handle all traffic on day one.


While this will not benefit customers that are running older versions of Exchange, customers who are upgrading from Exchange 2013 to Exchange 2016 will go through a really easy and seamless process.

In summary

Taking a bit of time to clean up your current infrastructure and to understand your options for your hybrid deployment can save you a lot of time and aggravation later.

Lou Mandich, Scott Roberts, Ross Smith IV, Scott Landry, Timothy Heeney

A brave new world for Exchange 2016 cmdlet reference topic delivery and updates


Update-ExchangeHelp is an Exchange cmdlet that installs the latest available Exchange cmdlet reference help topics on the Exchange server for use in the Exchange Management Shell (Get-Help <Cmdlet>). Although there are ostensibly Windows PowerShell cmdlets for this same task (Update-Help and Save-Help), they don’t work with the Exchange Management Shell.

Update-ExchangeHelp was available in Exchange 2013, but we haven’t previously used it to its full potential. That’s about to change for Exchange 2016. We’re going to rely heavily on Update-ExchangeHelp to release new and updated Exchange cmdlet reference topics for the Exchange Management Shell for all Exchange 2016 product releases (RTM and Cumulative Updates (CUs)). In fact, in a given Exchange product release, you may find that some Exchange cmdlet reference topics aren’t fully documented at the command line.

How is this a positive, you may ask? By changing how we approach updating cmdlet reference topics, we can achieve the following benefits:

  • Increased quality for the cmdlets customers actually use   We can concentrate on writing help for new cmdlets and updates to high-usage cmdlets first. We have 2+ years of customer usage data for the Exchange 2013 cmdlet reference topics on TechNet. Instead of attempting to update every single cmdlet topic completely for an Exchange release, as we’ve done in the past (think: wide and shallow), we can concentrate on making updates to the new and most highly viewed cmdlets first (think: narrow and deep).
  • Timely localization   For a given Exchange release, the English help for Exchange cmdlets comes first, and the translated versions follow. This means that localized Exchange cmdlet help for RTM was available at the command line for CU1, localized Exchange cmdlet help for CU1 was available at the command line for CU2, etc. Using Update-ExchangeHelp, we can make the localized Exchange cmdlet help available as soon as it’s ready without having to wait for the next release.

For an Exchange product release, we’ll target the highest priority updates for Exchange cmdlet reference topics, and get those updates into Exchange code for availability at the command line. After the Exchange product release, we’ll continue to work on updates to Exchange cmdlet reference topics, and publish them to TechNet and as downloadable update packages for Update-ExchangeHelp. At your convenience before the next Exchange product release, you can use Update-ExchangeHelp to download the updated Exchange cmdlet reference topics for the Exchange Management Shell.

We’ll try to balance the frequency of update package releases between Exchange product releases. An Exchange product release allows us to get updated Exchange cmdlet reference topics into Exchange code, so there’s no need to release a downloadable update package right before an Exchange product release. Given the quarterly cadence of Exchange product releases, it’s reasonable to expect one or two English update package releases per quarter, and no more frequently than a month or so apart. For localized versions, it’s likely we’ll release one update package per quarter. The English versions of the topics will always be ahead of the localized versions, but the gap will be smaller.

Here’s an example. There’s an Exchange product release, and not all of the Exchange cmdlet reference topics are complete. We’ll continue to work on the topics, and one month after the release, we’ll publish an update package for English. We’ll localize that update package and publish it as soon as it’s ready (likely, a few weeks after the English update package). We’ll continue to work on incomplete cmdlet reference topics in English, and we’ll publish another update package for English about a month after the first one. At this point, we’re two months into the quarter, and the next Exchange product release is likely only one month away. We’ll continue to improve the cmdlet reference topics, and we’ll check them into Exchange code for that next product release. When that Exchange product release goes public, the cycle starts over again.

Another key factor in this strategy is notification. You can periodically run Update-ExchangeHelp to check for updates, but that’s not ideal. We’ll likely use an RSS feed to notify you when an update package is available.

How it works

Using Update-ExchangeHelp is pretty straightforward: you run this command in the Exchange Management Shell on an Exchange server, or on a computer that has the Exchange Management Tools installed.

Update-ExchangeHelp -Verbose

The Verbose switch is important, because it gives you status messages, like “your server is already up-to-date” or “you already tried this within the last 24 hours.” To bypass the 24-hour limit and run the command more frequently, you can add the Force switch.

The problem? The Exchange server requires Internet access so it can download the update package. Not a big deal for some, but a deal-breaker for others. To work around this, read on.

Offline mode for Update-ExchangeHelp

Basically, there are four steps you need to follow to customize Update-ExchangeHelp so it looks for updates on your local network.

  1. Download and inspect the ExchangeHelpInfo.xml manifest file.
  2. Download the update packages, publish the update packages on an internal web server, and customize the ExchangeHelpInfo.xml file.
  3. Publish the customized ExchangeHelpInfo.xml file.
  4. Modify the registry of the Exchange servers to point to the customized ExchangeHelpInfo.xml file.

Step 1: Download and inspect the ExchangeHelpInfo.xml manifest file.

Open http://go.microsoft.com/fwlink/p/?LinkId=287244, save the ExchangeHelpInfo.xml file, and open the file in Notepad. Here’s a hypothetical example of the contents of the ExchangeHelpInfo.xml file:

<?xml version="1.0" encoding="utf-8"?>
<ExchangeHelpInfo>
  <HelpVersions>
    <HelpVersion>
      <Version>15.01.0225.030-15.01.0225.050</Version>
      <Revision>001</Revision>
      <CulturesUpdated>en</CulturesUpdated>
      <CabinetUrl>http://download.microsoft.com/download/8/7/0/870FC9AB-6D22-4478-BFBF-66CE775BCD18/ExchangePS_Update_En.cab</CabinetUrl>
    </HelpVersion>
    <HelpVersion>
      <Version>15.01.0225.030-15.01.0225.050</Version>
      <Revision>002</Revision>
      <CulturesUpdated>de, es, fr, it, ja, ko, pt, pu, ru, zh-HanS, zh-HanT</CulturesUpdated>
      <CabinetUrl>http://download.microsoft.com/download/8/7/0/870FC9AB-6D22-4478-BFBF-66CE775BCD18/ExchangePS_Update_Loc.cab</CabinetUrl>
    </HelpVersion>
    <HelpVersion>
      <Version>15.01.0225.030-15.01.0225.050</Version>
      <Revision>003</Revision>
      <CulturesUpdated>en</CulturesUpdated>
      <CabinetUrl>http://download.microsoft.com/download/8/7/0/870FC9AB-6D22-4478-BFBF-66CE775BCD18/ExchangePS_Update_En2.cab</CabinetUrl>
    </HelpVersion>
  </HelpVersions>
</ExchangeHelpInfo>

Each available update package is defined in a <HelpVersion> section, and each <HelpVersion> section contains the following keys.

  • <Version>   Identifies the version of Exchange that the update package applies to. 15.01.xxxx.xxx is Exchange 2016; 15.00.xxxx.xxx is Exchange 2013. This key might specify one version or a range of versions.
  • <CulturesUpdated>   Identifies the language that the update package applies to. This key might specify one language or multiple languages.
  • <Revision>   Identifies the order in which the update packages were released for the major version of Exchange. In other words, the first update package released for Exchange 2016 is 001, the second is 002, and so on. Note that there’s no relationship between what an update package contains and the order the packages were released in. For example, 001 might be an English-only update, 002 might be an update for all other supported languages, and 003 might be a German-only update.
  • <CabinetUrl>   Identifies the name and location of the update package for the <HelpVersion> section.

The update package that's defined in a <HelpVersion> section applies to an Exchange server based on the combination of <Version> and <CulturesUpdated> values.

You might find that multiple <HelpVersion> sections apply to your Exchange servers for a given version of Exchange. For example, there might be multiple updates for the same language, or separate updates for different languages that both apply to your Exchange servers because you have multiple languages installed. Either way, you need only the most recent update for your Exchange server version and language based on the <Revision> key.

For example, suppose your Exchange servers are running Exchange 2016 version 15.01.0225.040 with English and Spanish installed, and the ExchangeHelpInfo.xml manifest file looks like the example mentioned above.

In this example, all the updates apply to you based on the version of Exchange. However, you need only revision 003 for English, and revision 002 for Spanish. You don't need revision 001 for English because revision 003 is newer.
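The selection logic above can be sketched in code. The following Python snippet is purely illustrative (the HCW and Update-ExchangeHelp are not implemented this way): it parses a hypothetical manifest shaped like the example earlier, with shortened example.com URLs, and picks the newest applicable revision per installed language for a given server version.

```python
import xml.etree.ElementTree as ET

# Hypothetical manifest mirroring the example above (URLs shortened for clarity).
MANIFEST = """<ExchangeHelpInfo>
  <HelpVersions>
    <HelpVersion>
      <Version>15.01.0225.030-15.01.0225.050</Version>
      <Revision>001</Revision>
      <CulturesUpdated>en</CulturesUpdated>
      <CabinetUrl>http://example.com/ExchangePS_Update_En.cab</CabinetUrl>
    </HelpVersion>
    <HelpVersion>
      <Version>15.01.0225.030-15.01.0225.050</Version>
      <Revision>002</Revision>
      <CulturesUpdated>de, es, fr</CulturesUpdated>
      <CabinetUrl>http://example.com/ExchangePS_Update_Loc.cab</CabinetUrl>
    </HelpVersion>
    <HelpVersion>
      <Version>15.01.0225.030-15.01.0225.050</Version>
      <Revision>003</Revision>
      <CulturesUpdated>en</CulturesUpdated>
      <CabinetUrl>http://example.com/ExchangePS_Update_En2.cab</CabinetUrl>
    </HelpVersion>
  </HelpVersions>
</ExchangeHelpInfo>"""

def parse_version(v):
    # "15.01.0225.040" -> (15, 1, 225, 40), so versions compare numerically
    return tuple(int(part) for part in v.split("."))

def version_applies(server_version, spec):
    # <Version> may be a single version or a "low-high" range
    low, _, high = spec.partition("-")
    high = high or low
    return parse_version(low) <= parse_version(server_version) <= parse_version(high)

def newest_applicable(manifest_xml, server_version, installed_cultures):
    """Return {culture: (revision, cab_url)} for the newest applicable packages."""
    best = {}
    root = ET.fromstring(manifest_xml)
    for hv in root.iter("HelpVersion"):
        if not version_applies(server_version, hv.findtext("Version")):
            continue
        revision = int(hv.findtext("Revision"))
        cab_url = hv.findtext("CabinetUrl")
        for culture in (c.strip() for c in hv.findtext("CulturesUpdated").split(",")):
            if culture in installed_cultures and revision > best.get(culture, (0,))[0]:
                best[culture] = (revision, cab_url)
    return best

# English + Spanish server at version 15.01.0225.040: en -> revision 003, es -> 002
needed = newest_applicable(MANIFEST, "15.01.0225.040", {"en", "es"})
```

Running this against the hypothetical manifest selects revision 003 for English and 002 for Spanish, matching the reasoning in the paragraph above.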

Step 2: Download the update packages, publish the update packages on an internal web server, and customize the ExchangeHelpInfo.xml manifest file.

The easiest and least time-consuming thing to do is to act like every available update package applies to you.

  1. Download all of the .cab files that are defined in the ExchangeHelpInfo.xml file by using the URL that’s defined in the <CabinetUrl> value.
  2. Publish those .cab files on an Intranet server (for example http://intranet.contoso.com/downloads/exchange).
  3. Modify the <CabinetUrl> values in the ExchangeHelpInfo.xml file to point to the .cab files on the Intranet server (for example, http://intranet.contoso.com/downloads/exchange/<cabfile>).
  4. Save the customized ExchangeHelpInfo.xml file.
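If you have many <CabinetUrl> values to rewrite, a small script can do step 3 for you. This Python sketch (an illustration, not an official tool) replaces each download URL with the intranet copy of the same .cab file; the intranet URL is the example value used throughout this article.

```python
import xml.etree.ElementTree as ET

def repoint_manifest(manifest_xml, intranet_base):
    """Rewrite every <CabinetUrl> to point at the intranet copy of its .cab file."""
    root = ET.fromstring(manifest_xml)
    for url_el in root.iter("CabinetUrl"):
        # Keep only the .cab file name from the original download URL
        cab_name = url_el.text.strip().rsplit("/", 1)[-1]
        url_el.text = intranet_base.rstrip("/") + "/" + cab_name
    return ET.tostring(root, encoding="unicode")

sample = """<ExchangeHelpInfo><HelpVersions><HelpVersion>
  <Version>15.01.0225.030</Version><Revision>001</Revision>
  <CulturesUpdated>en</CulturesUpdated>
  <CabinetUrl>http://download.microsoft.com/download/8/7/0/ExchangePS_Update_En.cab</CabinetUrl>
</HelpVersion></HelpVersions></ExchangeHelpInfo>"""

customized = repoint_manifest(sample, "http://intranet.contoso.com/downloads/exchange")
```

Save the returned string as your customized ExchangeHelpInfo.xml.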

The benefits to this approach?

  • Not much thought involved. It’s difficult to make a mistake and accidentally miss an update that applies to you, because you’re grabbing every available .cab file.
  • Easier maintenance. Whenever we release an update package, you just download the new ExchangeHelpInfo.xml file, and grab every new .cab file that’s defined in it.

The drawback to this approach?

  • It’s pretty much guaranteed that you’ll download update packages that you don’t need based on version and language.
  • Space is consumed on the Intranet server by irrelevant .cab files that you don’t need.

If you want to identify only the .cab files that apply to you, follow these steps.

1. Find the version details for your Exchange servers

a. To find the version details on a single Exchange server, run the following command in the Exchange Management Shell.

Get-Command Exsetup.exe | ForEach {$_.FileVersionInfo}

b. To find the version details for all Exchange servers in your organization, run the following command in the Exchange Management Shell.

Get-ExchangeServer | Sort-Object Name | ForEach {Invoke-Command -ComputerName $_.Name -ScriptBlock {Get-Command ExSetup.exe | ForEach{$_.FileVersionInfo}}} | Format-Table -Auto

The result for ProductVersion will be in the format 15.01.0225.xxx.

2. Find the <HelpVersion> sections in the ExchangeHelpInfo.xml file that apply to you based on the values of the <Version>, <CulturesUpdated>, and <Revision> keys. The methodology was described in Step 1.

After you identify the .cab files that apply to you, follow these steps:

  1. Download the applicable .cab files by using the URL that’s defined in the <CabinetUrl> value.
  2. Publish those .cab files on an Intranet server (for example http://intranet.contoso.com/downloads/exchange).
  3. Modify the <CabinetUrl> values in the ExchangeHelpInfo.xml file to point to the .cab files on the Intranet server (for example, http://intranet.contoso.com/downloads/exchange/<cabfile>).
  4. If you like, you can also remove the <HelpVersion> sections that don’t apply to you.
  5. Save the customized ExchangeHelpInfo.xml file.

Step 3: Publish the customized ExchangeHelpInfo.xml file

In the previous step, you customized the ExchangeHelpInfo.xml file by changing the <CabinetUrl> values to point to the .cab files on an Intranet server. Now, you need to publish the customized ExchangeHelpInfo.xml file on an Intranet server (for example, http://intranet.contoso.com/downloads/exchange/ExchangeHelpInfo.xml). Note that there's no relationship between the ExchangeHelpInfo.xml file and .cab file locations. You can have them available at the same URL or on different servers.

Step 4: Modify the registry of the Exchange servers to point to the customized ExchangeHelpInfo.xml file

You need the Intranet location of the ExchangeHelpInfo.xml file that you configured in the previous step. This example uses the value http://intranet.contoso.com/downloads/exchange/ExchangeHelpInfo.xml.

1. Copy and paste the following text into Notepad, customize the URL for your environment, and save the file as UpdateExchangeHelp.reg.

Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\ExchangeServer\v15\UpdateExchangeHelp]
"ManifestUrl"="http://intranet.contoso.com/downloads/exchange/ExchangeHelpInfo.xml"

2. Run the UpdateExchangeHelp.reg file on your internal Exchange servers.

Maintenance and use of Update-ExchangeHelp

Now when you run Update-ExchangeHelp in the Exchange Management Shell on your Exchange servers, the command gets information and downloads files from the Intranet locations you specified. That’s easy.

What’s less easy is the long-term maintenance of this customized setup. Basically, you'll need to repeat Steps 1 through 3 when you discover a downloadable update package has been made available for Exchange cmdlet reference help, and you want to deploy that updated help to your Exchange servers (everything but the registry modification).

Chris Davis

Deprecating the REST API preview endpoint


While we do not cover Developer subjects very often on this blog, we wanted to let you know about this:

As some of you know, the Outlook REST APIs moved from preview to general availability in October 2014. As part of this transition, we are shutting down the old preview endpoint https://outlook.office365.com/ews/odata on October 15th, 2015. You can continue to use the https://outlook.office.com/api/beta endpoint to check out our latest and greatest APIs. We require all apps and services using the https://outlook.office365.com/ews/odata endpoint to move to the https://outlook.office.com/api/v1.0 endpoint by October 15th, 2015. This will also enable those apps to benefit from the API enhancements being added continuously to https://outlook.office.com/api/beta and https://outlook.office.com/api/v1.0.

Where do I go to find more about it?

If you want to know more about what features are supported by the https://outlook.office.com/api/ endpoint, go to https://dev.outlook.com/, where you will find getting started materials and API references. Please check https://dev.outlook.com/ for any future updates. You can also post your questions on Stack Overflow with outlook-restapi tag.

Deepak Singh

Introducing the Microsoft Office 365 Hybrid Configuration Wizard


Running Exchange 2013 CU8 or higher? Download the new wizard!

The Exchange hybrid team has been working hard over the past year getting the third version of the Hybrid Configuration Wizard (HCW) ready. This new version is called the Microsoft Office 365 Hybrid Configuration Wizard. This article tells you what’s new and shows you how to run the wizard. We also explain the various issues that have been addressed with the new wizard, and touch on some of the telemetry we pull with every run of the wizard. We think this new wizard has enough of the old to reduce the learning curve while adding plenty of enhancements to make your hybrid deployment as friction-free as possible.

Microsoft Office 365 Hybrid Configuration Wizard Stand-Alone Application

This version of the HCW is a standalone application that is downloaded from the service. This is an important change because one of the bigger limitations of the previous versions of the HCW was that it was included with the on-premises product. This led to the following issues:

  • Up-to-date hybrid experience: When you ran the HCW, you got the experience consistent with your on-premises version of Exchange Server. This meant that if you were running Exchange 2013 CU7 you got the CU7 experience, and if you ran Exchange 2013 CU9 you got the CU9 experience. Each customer would have a different HCW experience.

Solution: The new HCW will download the latest version every time it is run, therefore providing the latest and improved experience. As soon as we make changes to, or fix any issues in the HCW, customers will see the benefits immediately.

  • HCW not tied to Cumulative Updates: Since the previous versions of the HCW were part of the on-premises product, they were updated per the regular Exchange serviceability model. This meant that the hybrid team had to wait for a new Cumulative Update (Rollup for Exchange 2010) every three months to deliver any enhancements or changes. For a component like hybrid, that is a problem: we have to be agile enough to handle changes not just on-premises, but also in the service.

Solution: Again, every time you attempt to run the HCW we will ensure you have the latest version. This version will of course go through its rounds of validation, but it is in no way tied to the releases of a CU. No more waiting months for fixes!

  • Piloting Changes: As we move forward with this new HCW we will be making some aggressive changes. In the months ahead we want to add more capabilities to HCW. One of the most important changes in HCW will be the ability to roll out feature changes slowly and in a controlled manner.

Solution: We have built in the capability to allow customers who are on “First Release,” and any other customers we specify (for example, TAP customers), to see the latest version of the HCW. Often the latest release and the production release will be the same version, but we do have the ability to pilot versions of the HCW as needed.

Improvements to error handling

The HCW has a lot of dependencies and relies on various prerequisites for a successful completion. For example, you have to add an external TXT record for the HCW to create the Federation Trust, you have to have your certificates properly installed on your Exchange servers, and you have to have Internet access from your Exchange servers to name a few. I am not trying to scare you away from hybrid, in fact the wizard does walk you through most of the prerequisites. I am instead trying to point out that there are many failure points for the HCW to contend with.

Until now, the solution was to provide you an error message that included a stack trace. These error messages are extremely difficult to decipher, and often the first reaction after a couple of failed Internet searches was to call into support. Figure 1 shows the old error experience for those that may not be familiar with it.

[Figure 1: Old error messages]

Our goal is to allow you to successfully configure without an error, but we also want to make sure that we give you the information needed to get past any hurdles you may face. Figure 2 shows a sample of the new (much more informative) error experience. In the sample you can see the following major improvements to the error experience:

  • Improved title: We have added the ability to see what phase and task were being completed at the time of the failure. For instance, you can see whether we failed at the prerequisite check or the configuration phase. You will also immediately know if we failed to create the Organization Relationship or Outbound Connectors.
  • Error code: We have added a new error code for all the possible error messages in the new wizard. You will now see all errors prepended with a code HCW8***. This change allows for our errors to be easily searched and it allows them to remain searchable even if we change the context of the errors.
  • Humans can read the errors: One of the previous challenges was that we provided a stack trace as the error message instead of just a friendly actionable string. We now keep the stack trace in the logs for anyone who may want that information.
  • New “More info” feature: We added the “More info…” option under the error message. We have recently associated a KB or TechNet article as the most likely solution to EVERY error message the HCW throws. Simply click the “More info…” link and you will be taken to that solution.
  • Access to log files: You can easily access the HCW log file by clicking the “Open Log File” link. In addition, you will find the log file on the system where you ran the new wizard by going to “%appdata%\Microsoft\Exchange Hybrid Configuration”. Keep in mind that the old log location in the Exchange install directory is no longer used.
  • Coolest addition: When you run the HCW you will more than likely have the Exchange Admin Center already open, but there is a chance that if you run into an issue you will need to use either your on-premises or Exchange Online PowerShell. The new HCW error experience includes a link that will open the on-premises and/or Exchange Online PowerShell. We already have the credentials you entered into the wizard, so you can seamlessly open PowerShell by using those credentials. In addition, we open the Exchange Online PowerShell with a blue background and the Exchange on-premises PowerShell with a black background so you can easily differentiate the two.

[Figure 2: Awesome error experience]

Top issues solved by the new HCW

About a year ago we came out with a tool to help you troubleshoot your hybrid experience. This tool collects and parses the HCW log and provides a link to an article with a solution to your issue. The tool has been run thousands of times and has given us great insight into the top failure points for the HCW. This telemetry tells us what we need to focus on and allows us to see failure trends, but in the end we were limited to the information gathered from folks who ran the HCW troubleshooter.

Because we want to be as helpful as possible, we now by default upload the HCW logs to the service when you run the new wizard. Gathering this data will allow us to serve you better by limiting the amount of time it takes for someone in support to find out more about your environment and it allows us to see any trending issues and failure points that we need to address. Even with the limited amount of logs we have collected from the troubleshooter, we have been able to identify the following issues and are addressing them in the new HCW. I think you will see why the log collection is so important to the hybrid team.

Note: If you want to opt out of uploading the hybrid logs, you can do so by setting the registry key below on the machine where you are running the HCW:
1. Navigate to the following location in the registry, create the path if needed:
Exchange 2016: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\ExchangeServer\v16\Update-HybridConfiguration
Exchange 2013: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\ExchangeServer\v15\Update-HybridConfiguration
2. Create REG_DWORD “DisableUploadLogs” with value 1

TXT proof string issues

Any time you are required to add a DNS entry, you are dealing with a potential for failure. The HCW includes a step where you need to add a record to external DNS to prove to the Azure Authorization Service (known as the Microsoft Federation Gateway) that you own the domain. This step may seem trivial, but it accounts for ~15% of our HCW failures.

Usually the TXT proof string gets mangled in one of two ways:

  • Incorrect string entered: When creating the DNS record to prove domain ownership, we often see that an incorrect value was provided. This is in large part due to the way the HCW copied the value. In the previous version of the HCW, when you copied the TXT string, we prepended the words “Domain Proof” so it looked similar to “Domain Proof = t4jnhkjdesy78hrn…”.

Solution: While simple, moving forward we are only going to copy the part of the string that is needed from the “copy link” option in HCW, which should lead to fewer issues with incorrect TXT strings.

  • Domain name lockouts: The point of providing this TXT string to the external DNS is so the service can validate that you own the domain and federation certificate. After a few failed attempts to validate a domain we lock you out from federating that domain for a few hours. The purpose of this lockout is to prevent a denial of service attack. Often this issue occurs because someone put the wrong value in DNS (see the first bullet), someone created the record and did not wait for replication of the record, or someone created the record in internal (not external) DNS.

Solution: To resolve this we created a new external endpoint in the service that will perform the DNS lookup for the TXT record and only try to federate the domain if the record is correct or if that new service endpoint cannot be found. The logic for this is as follows:

    1. First we try to hit the new external service endpoint and see if the TXT record is resolvable externally and is correct in DNS. If so, we move forward with federating the domain.
    2. If the record is either wrong or not resolvable, we inform you that you need to verify the record and wait for replication.
    3. If the new external TXT validation service is not reachable, we will warn you that we could not verify the TXT record but allow you to continue anyway.
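The three-way decision above can be summarized as a small function. This is a conceptual Python sketch of the logic, not the HCW's actual implementation; the return strings and the use of ConnectionError to model an unreachable validation service are assumptions for illustration.

```python
def decide_txt_step(domain, expected_proof, lookup_txt):
    """Sketch of the HCW's three-way TXT validation decision.

    lookup_txt(domain) should return a list of TXT record strings, or raise
    ConnectionError when the external validation service cannot be reached.
    """
    try:
        records = lookup_txt(domain)
    except ConnectionError:
        return "warn-and-continue"       # step 3: validation service unreachable
    if expected_proof in records:
        return "federate-domain"         # step 1: record resolvable and correct
    return "wait-for-correct-record"     # step 2: record wrong or not replicated

# Example: the proof string is present in external DNS, so federation proceeds
result = decide_txt_step("contoso.com", "t4jnhkjdesy78hrn",
                         lambda d: ["t4jnhkjdesy78hrn"])
```

Only when the record is verifiably wrong or missing does the wizard ask you to fix DNS and wait for replication, which avoids the lockout scenario described above.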

Figure 3 shows the new TXT experience you will get with the Microsoft Office 365 Hybrid Configuration Wizard.

[Figure 3: TXT records]

Missing Certificate in Wizard

The HCW has a screen that asks you for the “Transport Certificate”. The HCW looks to ensure this certificate is installed on every server that you designated to be part of the Send and Receive Connector Configuration, as shown on the pages in Figure 4.

[Figure 4: Send and Receive Connector]

In order for the certificate to properly display you need to ensure that the following has been completed on all of the servers designated in the wizard pages shown in figure 4:

  • The certificate must be a third-party trusted certificate.
  • The proper names must be on the certificate, such as mail.contoso.com or *.contoso.com.
  • The SMTP service must be assigned to the certificate on each of the sending and receiving servers.
  • The certificates must have a private key.
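A per-server check against those four requirements might look like the following Python sketch. The dict-based certificate model and the field names are hypothetical, purely to make the wildcard-name and service checks concrete; the HCW performs these checks against real certificate stores.

```python
def name_matches(cert_name, host):
    """True if a certificate name (exact or wildcard) covers the given host."""
    cert_name, host = cert_name.lower(), host.lower()
    if cert_name.startswith("*."):
        # A wildcard covers exactly one label: *.contoso.com -> mail.contoso.com
        return "." in host and host.split(".", 1)[1] == cert_name[2:]
    return cert_name == host

def transport_cert_ok(cert, required_host):
    """Check the four requirements against a dict describing a certificate."""
    return (cert["third_party_trusted"]        # trusted third-party CA
            and cert["has_private_key"]        # private key present
            and "SMTP" in cert["services"]     # SMTP service assigned
            and any(name_matches(n, required_host) for n in cert["names"]))

# A hypothetical certificate that satisfies all four requirements
cert = {"third_party_trusted": True, "has_private_key": True,
        "services": ["IIS", "SMTP"], "names": ["*.contoso.com"]}
```

If any one of the four conditions fails on any designated server, that server's certificate would be excluded, which is exactly the situation the new HCW now surfaces instead of showing a blank list.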

These requirements are nothing new, but if you have a large environment, getting all of this correct on a large number of servers can be a tough task. If even one server was missing any of the requirements, we would fail to show you the certificate. In previous versions of the HCW you were left with a blank screen (see figure 5) which offered no direction or solution.

[Figure 5: Blank certificate]

The Microsoft Office 365 Hybrid Configuration Wizard will not remove the certificate requirements, but it will help you solve the issue. The HCW now shows you a list of certificates that meet the requirements, and it shows you the servers that do not have a proper certificate installed (see figure 6). This allows you to either remove those servers from the HCW receive and send connector pages, or properly install the certificate on those servers.

[Figure 6: Better certificate error]

A more efficient Hybrid experience

One of the things we tried to do with the HCW is ensure that we are performing the various configurations in the most efficient way possible (this is our on-going green effort). A good example of an inefficient task that the HCW previously performed was the Mailbox Replication Service (MRS) enablement process. In the HCW logs collected from the troubleshooter, we could see that this step was often taking an extremely long time to complete. What we do now is enable the migration endpoint on the servers in your environment so that you can start moving mailboxes as soon as the HCW is complete, without having to enable the endpoint yourself. One of the cmdlets that we used in the previous version of the HCW was Get-WebServicesVirtualDirectory. In a larger, often geographically dispersed environment, this cmdlet could take over eight hours to run. In many cases you would end up getting the following error:

ERROR: Updating hybrid configuration failed with error 'Subtask Configure execution failed: Configuring organization relationship settings. Execution of the Set-WebServicesVirtualDirectory cmdlet had thrown an exception. This may indicate invalid parameters in your Hybrid Configuration settings. Unable to access the configuration system on the remote server. Make sure that the remote server allows remote configuration

Solution: We have resolved this issue in the new HCW by using the -ADPropertiesOnly switch with Get-WebServicesVirtualDirectory. With this switch, the HCW reads the MRS settings using a local Active Directory call instead of waiting for a response from every server in the environment. This change, along with a few others in this area, reduces the process from around 8 hours to around 15 minutes in these large environments (your deployment times will vary). This is just one example of the type of cleanup we did in the HCW to improve the reliability and speed of the configuration tasks.

Autodiscover issues in HCW

The single most common failure point for the HCW is the inability to retrieve federation information via the Autodiscover call initiated by the Get-FederationInformation cmdlet. The output of this cmdlet is needed to create the organization relationships that enable features like free/busy sharing. This accounts for nearly 30% of all HCW failures based on the logs collected from the troubleshooter (are you starting to see the importance of these log files?). There are certain aspects of this issue the wizard cannot directly address; for instance, sometimes the problem is an improperly configured firewall, or a missing third-party certificate for IIS on the Exchange servers. However, a good portion of you had things configured correctly and we still failed to complete the Get-FederationInformation cmdlet.

One of the things this cmdlet does is use the DNS settings of the server you are connected to in order to resolve the Autodiscover endpoint and retrieve the federation information. Many customers do not create an internal DNS record for Autodiscover, since there is often no need for one: internal Outlook clients use the Service Connection Point (SCP) to find the Autodiscover endpoint. The Get-FederationInformation cmdlet, however, does not use the Service Connection Point. Therefore, if there is no forwarding configured for this zone in DNS, the Get-FederationInformation cmdlet will be unable to resolve the Autodiscover endpoint and the HCW will fail.

Solution: To resolve this issue, we have added a new method of checking for federation information. We still try local DNS first; if that fails, we then try an external service to see if we can retrieve the federation information externally. This ensures that if Autodiscover is published properly externally, the HCW will complete as expected. See figure 7 for details:

image
Figure 7: Get-FedInfo

OAuth Integration

Another common failure point is the OAuth portion of the HCW. The HCW today shows you an option to configure OAuth if you are Exchange 2013 native, but not if you coexist with previous versions of Exchange. OAuth is required for some features, such as cross-premises discovery and automatic archive retention. Because of that, we want to ensure that OAuth is configured by default so that all of the hybrid features work when you complete the HCW.

One downside is that the OAuth configuration experience previously had a high rate of failure. We have fixed a good portion of that experience, and we have also added logic to the new HCW so that if the OAuth portion fails, we disable the OAuth configuration by disabling the IntraOrganizationConnector, let you know we disabled it, and give you remediation steps. This ensures that a failed OAuth configuration does not prevent other hybrid features, such as cross-premises free/busy, from working.
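That "fail soft" behavior is a simple try/fallback pattern: attempt the OAuth setup, and on failure disable the connector and report what happened. The sketch below illustrates the pattern in Python; the function names, the result shape, and the warning text are assumptions for illustration, not the HCW's internal API.

```python
# Sketch of the HCW's OAuth failure handling (hypothetical names).
# run_oauth_setup and disable_connector are injected callables standing in
# for the real configuration steps.

def configure_oauth(run_oauth_setup, disable_connector):
    try:
        run_oauth_setup()
        return {"oauth_enabled": True, "warnings": []}
    except RuntimeError as err:
        # Fail soft: keep free/busy and other hybrid features working
        # by turning off the partially configured OAuth path.
        disable_connector()
        return {"oauth_enabled": False,
                "warnings": [f"OAuth setup failed ({err}); "
                             "IntraOrganizationConnector disabled. "
                             "See remediation steps in the HCW log."]}
```

The key design point is that the wizard completes either way: a broken OAuth step degrades one feature set instead of aborting the whole hybrid configuration.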

Many more…

The above are just a few of the issues addressed in the latest version of the HCW. There are many other examples we could have used, such as a couple of issues we addressed with mail flow, multi-forest deployments, and more. In this latest version we strove for feature parity while improving the failure rate and allowing for future innovation. We think we have hit the mark.

Running the HCW

Now that we have covered some of the new features and benefits of running the Microsoft Office 365 Exchange Hybrid Configuration Wizard, let’s take a guided tour. We are not going to go through each option in depth as most of them have not changed from Exchange 2013.

How to find the new HCW

We have not moved the location of the HCW in the Exchange Admin Center; the entry point's look and feel is consistent with the previous version of the Exchange 2013 HCW. The only difference is that instead of calling local code when you click "configure" or "modify" in the hybrid node of the EAC, we now launch the ClickOnce application. Figure 8 shows the entry point.

image
Figure 8: Entry Point

HCW Landing Page

The next screen you will see is the HCW landing page, which serves two purposes. The first and most important is that we can redirect a small subset of customers (based on pre-defined criteria) to an alternate HCW experience. As discussed earlier in this post, this allows us to pilot new features without affecting the production HCW experience. The second benefit of the landing page is that it allows us to provide a proper error message if the browser version, popup blockers, and so on are not configured in a way that supports the HCW. When you are on the landing page, select the "click here" option to download the HCW. See figure 9 for a view of the landing page.

image
Figure 9: Landing page

Welcome Screen

The Welcome screen (see figure 10) provides a link that explains what a hybrid configuration is, along with an additional link at the bottom that explains what the HCW application is going to do. The second link is at the bottom-left of the screen and reads "What does this application do?" On this screen you simply click next to continue.

image
Figure 10: Welcome screen

Server Detection Page

The next screen allows you to choose which server you will use to perform your hybrid configuration. This is the machine that the HCW will remote PowerShell into in order to perform all of the hybrid configuration tasks.

The selected server must be running a version of Exchange that is within two releases of the current Cumulative Update. This means that at launch, the new HCW will work if you are connecting to Exchange 2013 CU8 or a newer version of Exchange. However, when Exchange 2013 CU11 releases, we will no longer allow you to run the new HCW from Exchange 2013 CU8 and will require a minimum of CU9. Keep in mind that even though the HCW will allow you to proceed if you are two versions behind the current release (n-2), we only support going one version back for hybrid (n-1).
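The n-2/n-1 version gate above can be expressed as a small amount of arithmetic on CU numbers. The sketch below is an illustration of that rule only; the function name and return values are hypothetical, and CU numbers stand in for full build versions.

```python
# Sketch of the HCW's CU version gate: connecting is allowed within two
# CUs of the current release (n-2), but hybrid is only *supported* one CU
# back (n-1). Hypothetical helper, not the HCW's implementation.

def version_gate(server_cu, current_cu):
    if server_cu > current_cu:
        return "unknown"      # newer than the current release; shouldn't happen
    behind = current_cu - server_cu
    if behind <= 1:
        return "supported"    # n or n-1: fully supported for hybrid
    if behind == 2:
        return "allowed"      # n-2: HCW will run, but hybrid is unsupported
    return "blocked"          # older than n-2: HCW refuses to connect
```

So with CU10 current, a CU8 server can still run the wizard ("allowed") even though only CU9 and CU10 are supported hybrid configurations; once CU11 ships, that same CU8 server is blocked.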

If you were to select a server running an unsupported version, the HCW will display an error stating that the version is not supported. In addition, the HCW will provide you with a list of servers that are running a supported version (if any exist).

image
Figure 11: Unsupported version

The HCW will try to select the best server from which to perform the configuration tasks, using the following logic:

  1. First we look to see if the server we are on is running the latest supported version of Exchange in the organization.
  2. Next we look to see if there is an existing Exchange server in the site running the latest supported version of Exchange.
  3. Finally, we attempt to connect to an out of site Exchange server (typically in a different geographical location) running the latest supported version.
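The three-step preference order above can be sketched as a filter on the latest supported version followed by an ordered search: local server first, then in-site, then out-of-site. The data shapes and names below are hypothetical illustrations of the described logic, not the HCW's code.

```python
# Sketch of the HCW's server detection order. Each server is a dict with
# "name", "version" (higher = newer), and "in_site" (same AD site as us).

def pick_server(local_name, servers):
    """Pick a server per the HCW's preference: local, in-site, out-of-site."""
    if not servers:
        return None
    latest = max(s["version"] for s in servers)
    candidates = [s for s in servers if s["version"] == latest]
    # 1. The server we are already connected to, if it runs the latest version.
    for s in candidates:
        if s["name"] == local_name:
            return s["name"]
    # 2. Another server in the same site running the latest version.
    for s in candidates:
        if s["in_site"]:
            return s["name"]
    # 3. Any remaining (out-of-site) server running the latest version.
    return candidates[0]["name"]
```

Note that an older local server loses to a newer in-site server: running the latest supported version always trumps proximity in this ordering.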

If you do not like the server selection the HCW made via the detection logic above, you can manually specify the server you want to connect to. You can use the short name (ServerName) or the fully qualified name (ServerName.Contoso.com) in the provided box to select a server running a supported version of Exchange.

The last option on this page allows you to select the tenant location. For most customers the tenant location is simply "Microsoft Office 365", but if your Office 365 service is operated by 21 Vianet, you can also choose the "21 Vianet" option.

image
Figure 12: Server detection

Credentials page

The main improvement on this page is that we no longer force you to type in your on-premises credentials. However, if you are not signed in as a user with the Organization Management role, you can manually override this behavior and provide separate credentials.

image
Figure 13: Credentials page

Connection Status page

We will then show you the connection status window, which will let you know if improper credentials were provided on the previous step. Usually this is a pretty uneventful window and you just click next.

image
Figure 14: Connection status

Mail flow options page

The rest of the questions in the HCW from this page on relate to mail flow options. The experience and windows you see from this point forward may vary depending on the options selected. For more information on the mail flow options available, please review this article.

image
Figure 15: Mail flow options

Receive and Send Connector Configuration

This page of the wizard allows you to select the Exchange 2013 and/or Exchange 2016 servers that you intend to configure for sending and receiving mail in your on-premises environment. You can select a mix of 2013 and 2016 servers. Exchange 2010 servers cannot be chosen from these menus.

image
Figure 16: Receive Connector

image
Figure 17: Send Connector

Certificate selection page

We described the enhancements to the certificate selection page earlier in this post, covering the experience you will get if a valid certificate cannot be found on one of the sending and receiving servers selected on the previous pages (figures 16 and 17). This page is what you should expect to see when the certificates are installed properly on all servers: a list of certificates that meet all of the requirements and are installed on all of the selected servers. In most cases the list includes only one certificate.

image
Figure 18: Certificate

FQDN for Mail Flow

The final question in the wizard allows the HCW to properly configure the smart host settings on the outbound connector in Exchange Online. You will usually provide the FQDN that matches your MX record in this window.

image
Figure 19: FQDN

Update page

Up to this point, the HCW has made no modifications to your on-premises or Exchange Online environment. When you select the update option on this page, we start making the modifications based on the answers you provided on the previous screens. As in the old version of the HCW, we store those answers in the local Active Directory in a configuration object known as your desired state, and then read from that configuration object to make the modifications.

image
Figure 20: Update

Wrapping this up

The Exchange hybrid configuration process has evolved rapidly over the past few years, and we have done a lot in that time to simplify these complex configurations. With this latest version we have continued that trend by adding flexibility for innovation, more HCW stability, better HCW performance, a cleaner configuration experience, and (if needed) a proper error experience. However, our tools and services are built for you, so let us know what you think. When you try out the wizard, send us feedback through the feedback widget in the HCW: just look for the "give feedback" link at the bottom of the page and please rate the experience.

The Exchange Hybrid Team

Ask The Perf Guy: What’s The Story With Hyperthreading and Virtualization?

There’s been a fair amount of confusion amongst customers and partners lately about the right way to think about hyperthreading when virtualizing Exchange. Hopefully I can clear up that confusion very quickly.

We’ve had relatively strong guidance in recent versions of Exchange that hyperthreading should be disabled. This guidance is specific to physical server deployments, not virtualized deployments. The reasoning for strongly recommending that hyperthreading be disabled on physical deployments can be summarized in two points:

  • The increase in logical processor count at the OS level due to enabling hyperthreading results in increased memory consumption (due to various algorithms that allocate memory heaps based on core count), and in some cases also results in increased CPU consumption or other scalability issues due to high thread counts and lock contention.
  • The increased CPU throughput associated with hyperthreading is non-deterministic and difficult to measure, leading to capacity planning challenges.

The first point is really the largest concern, and in a virtual deployment it is a non-issue with regard to configuration of hyperthreading. The guest VMs do not see the logical processors presented to the host, so they see no difference in processor count when hyperthreading is turned on or off. Where this concern can become an issue for guest VMs is in the number of virtual CPUs presented to the VM. Don’t allocate more virtual CPUs to your Exchange server VMs than are necessary based on sizing calculations. If you allocate extra virtual CPUs, you can run into the same class of issues associated with hyperthreading on physical deployments.
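Put numerically: size the guest from physical cores, never from the logical processors hyperthreading exposes to the host. The sketch below illustrates that arithmetic; the 2:1 logical-to-physical ratio and the megacycle figures are assumptions for illustration (see the sizing guidance linked below for real numbers).

```python
# Sketch of the vCPU sizing point above: capacity math uses physical cores,
# not hyperthreaded logical processors. The 2:1 ratio is an assumption
# typical of hyperthreading, and megacycle inputs are illustrative only.

def max_vcpus_for_guest(host_logical_cpus, hyperthreading_enabled,
                        required_megacycles, megacycles_per_core):
    # Discount hyperthreading: only physical cores count for sizing.
    physical = host_logical_cpus // 2 if hyperthreading_enabled else host_logical_cpus
    # Allocate only what the workload needs (ceiling division),
    # capped at the physical core count.
    needed = -(-required_megacycles // megacycles_per_core)
    return min(needed, physical)
```

For example, a 32-logical-CPU host with hyperthreading on contributes only 16 cores to the calculation, so a guest is never given more than 16 vCPUs regardless of how many logical processors the hypervisor could expose.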

In summary:

  • If you have a physical deployment, turn off hyperthreading.
  • If you have a virtual deployment, you can enable hyperthreading (best to follow the recommendation of your hypervisor vendor), and:
    • Don’t allocate extra virtual CPUs to Exchange server guest VMs.
    • Don’t use the extra logical CPUs exposed to the host for sizing/capacity calculations (see the hyperthreading guidance at http://aka.ms/e2013sizing for further details on this).

Jeff Mealiffe
Principal PM Manager
Office 365 Customer Experience
