
Those pesky lazy indices


In Exchange 2013 there are indices within a given mailbox database.  The indices are created, maintained, and deleted by the Information Store Worker Process associated with a given database.  These indices are not to be confused with the Exchange content indexes built by the Search Foundation engine; the two are completely different.

Within an Exchange database there can exist any combination of primary, secondary, and lazy indices.  There is exactly one primary index and one secondary index on the messages table.  For example, a messages table exists where a primary index is created on DocumentID and a secondary index is created on FolderID, IsHidden, and MessageID.  Additionally, other lazy indices may be created that reflect client views, search folders, and even views of search folders.  Exchange maintains these indices through two methods: an eager method and a lazy method.  Primary and secondary indices are always maintained eagerly.  Lazy indices are maintained through the lazy indexing process (although some may be maintained eagerly).  There are many lazy indices per mailbox and usually multiple per folder.  Confused yet?  Let us see if we can explain further…

The eager method says that when an object is inserted into the table the indices must be immediately updated.  In the previous example an insert into the mailbox would require updating the primary index and then all secondary indices created against the same mailbox.  Each index update results in a random write being issued, so if a mailbox had 10 indices a single insertion would result in 10 random writes.  The performance impact could be significant depending on the structure of the indices.  In some cases indices exist but are never actually queried, which means update cycles are spent on data that is never read.  There do exist certain indices where immediate updating is required; this is why the eager method exists.

The lazy method is often utilized to mitigate the performance impact that indexing could cause.  When an insertion occurs in a folder, an entry is created in a lazy indexing maintenance table with information on the lazy indices that require updating.  In this case only two random writes are incurred (the record itself and the maintenance entry), regardless of the number of lazy indices that require updating.  Subsequently, when an index is accessed, we apply the maintenance records found within the table before returning the results of that index, ensuring the index is up to date.  Three major benefits are derived from this method:

  • Fewer random writes are incurred on indices.
  • If an index exists but is never used, we expend no random writes updating it.
  • If multiple records are inserted before an index is brought current, we can derive some write coalescing when updating it.
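As a rough sketch (illustrative PowerShell only, not actual store logic), the lazy method amounts to queuing maintenance records on write and draining the queue the next time the index is read:

```powershell
# Toy model of lazy index maintenance -- illustrative only, not actual store code.
$lazyIndex   = @{}    # the lazy index: key -> value
$maintenance = @()    # pending maintenance records for this index

function Insert-Record([string]$Key, [string]$Value) {
    # Eager maintenance would update every index now; the lazy method just
    # logs a single maintenance record regardless of how many indices exist.
    $script:maintenance += ,@($Key, $Value)
}

function Read-Index {
    # Before returning results, apply any pending maintenance records so the
    # index is current. Applying several at once coalesces the writes.
    foreach ($record in $script:maintenance) {
        $script:lazyIndex[$record[0]] = $record[1]
    }
    $script:maintenance = @()
    $script:lazyIndex
}

Insert-Record 'msg1' 'Subject A'
Insert-Record 'msg2' 'Subject B'
# If the index is never read, no index writes are ever spent on it.
(Read-Index).Count    # both pending updates applied in a single pass
```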

Issue

Customers have noted that on versions of Exchange 2013 prior to Cumulative Update 6 the following errors are recorded in the Application log, resulting in Information Store Worker Process termination and subsequent database failover.  The failovers and terminations may affect single or multiple databases and often result in databases failing over multiple times a day.

Source: MSExchangeIS
Event ID: 1001
Level: Error
Description:
Microsoft Exchange Server Information Store has encountered an internal logic error. Internal error text is (Unable to apply maintenance GetNonKeyColumnValuesForPrimaryKey-norow, index corruption?) with a call stack of (   at Microsoft.Exchange.Server.Storage.Common.ErrorHelper.AssertRetail(Boolean assertCondition, String message)
   at Microsoft.Exchange.Server.Storage.LazyIndexing.LogicalIndex.HandleIndexCorruptionInternal(Context context, Boolean allowFriendlyCrash, String maintenanceOperation, Nullable`1 messageDocumentId, Exception exception)
...(remaining stack redacted)
   at EcPoolSessionDoRpc_Managed(_RPC_ASYNC_STATE* pAsyncState, Void* cpxh, UInt32 ulSessionHandle, UInt32* pulFlags, UInt32 cbIn, Byte* rgbIn, UInt32* pcbOut, Byte** ppbOut, UInt32 cbAuxIn, Byte* rgbAuxIn, UInt32* pcbAuxOut, Byte** ppbAuxOut)).

Source: MSExchangeIS
Event ID: 1002
Level: Error
Description:
Unhandled exception (Microsoft.Exchange.Diagnostics.ExAssertException: ASSERT: Unable to apply maintenance GetNonKeyColumnValuesForPrimaryKey-norow, index corruption?
   at Microsoft.Exchange.Diagnostics.ExAssert.AssertInternal(String formatString, Object[] parameters)
   at Microsoft.Exchange.Server.Storage.Common.ErrorHelper.AssertRetail(Boolean assertCondition, String message)
   at Microsoft.Exchange.Server.Storage.LazyIndexing.LogicalIndex.HandleIndexCorruptionInternal(Context context, Boolean allowFriendlyCrash, String maintenanceOperation, Nullable`1 messageDocumentId, Exception exception)
   at Microsoft.Exchange.Server.Storage.LazyIndexing.LogicalIndex.HandleIndexCorruption(Context context, Boolean allowFriendlyCrash, String maintenanceOperation, Nullable`1 messageDocumentId, Exception exception)
   at Microsoft.Exchange.Server.Storage.LazyIndexing.LogicalIndex.GetNonKeyColumnValuesForPrimaryKey(Context context, Object[] primaryKeyValues)
   at Microsoft.Exchange.Server.Storage.LazyIndexing.LogicalIndex.DoMaintenanceDelete(Context context, Byte[] propertyBlob)
   at Microsoft.Exchange.Server.Storage.LazyIndexing.LogicalIndex.ApplyMaintenance(Context context, LogicalOperation operation, Byte[] propertyBlob)
   at Microsoft.Exchange.Server.Storage.LazyIndexing.LogicalIndex.SaveOrApplyMaintenanceRecord(Context context, MaintenanceRecordData maintenanceRecord, Boolean allowDeferredMaintenanceMode)
   at Microsoft.Exchange.Server.Storage.LazyIndexing.LogicalIndex.BuildDeleteRecords(Context context, IColumnValueBag updatedPropBag, Int64& firstUpdateRecord)
   at Microsoft.Exchange.Server.Storage.LazyIndexing.LogicalIndex.BuildUpdateRecords(Context context, IColumnValueBag updatedPropBag, Int64& firstUpdateRecord)
   at Microsoft.Exchange.Server.Storage.LazyIndexing.LogicalIndex.LogUpdate(Context context, IColumnValueBag updatedPropBag, LogicalOperation operation)
   at Microsoft.Exchange.Server.Storage.LazyIndexing.LogicalIndexCache.TrackIndexUpdate(Context context, Mailbox mailbox, ExchangeId folderId, LogicalIndexType indexType, LogicalOperation operation, IColumnValueBag updatedPropBag)
...(remaining stack redacted)
   at Microsoft.Exchange.Common.IL.ILUtil.DoTryFilterCatch[T](TryDelegate tryDelegate, GenericFilterDelegate filterDelegate, GenericCatchDelegate catchDelegate, T state)).

Source: MSExchange Common
Event ID: 4999
Level: Error
Description:
Watson report about to be sent for process id: 9112, with parameters: E12, c-RTL-AMD64, 15.00.0913.022, M.E.Store.Worker, M.E.S.Storage.LazyIndexing, M.E.S.S.L.LogicalIndex.HandleIndexCorruptionInternal, M.E.Diagnostics.ExAssertException, a762, 15.00.0913.000.
ErrorReportingEnabled: True
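If you suspect you are hitting this issue, you can look for these events from PowerShell. A minimal sketch (the message filter strings are assumptions based on the events shown above):

```powershell
# Look for recent Information Store assert events (1001/1002) and Watson
# reports (4999) related to lazy indexing in the local Application log.
Get-WinEvent -FilterHashtable @{ LogName = 'Application'; Id = 1001, 1002, 4999 } -MaxEvents 200 |
    Where-Object { $_.Message -like '*Unable to apply maintenance*' -or
                   $_.Message -like '*LazyIndexing*' } |
    Select-Object TimeCreated, Id, ProviderName |
    Format-Table -AutoSize
```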

Why does this issue occur?

Prior to CU6 it was possible for certain index maintenance operations to overlap each other.  This resulted in an inconsistency between an eager index update and a lazy index update that would happen later.  More specifically, an update that would be needed later was not emitted, which left the index in a corrupted state.  When the lazy index operation could not be applied later, the index was marked as corrupted, the process crashed, and the index had to be rebuilt.

How did Microsoft find this bug?

Although the result of the index corruption is a failover, there was no availability impact in the service.  The failovers successfully fixed the indices and no client impact occurred.  (It should be noted that the same is reported in on-premises installations.)  Thanks to customers who have enabled automatic error reporting, an uptick in reports related to lazy indexing was noticed.  This allowed our development teams to evaluate the code in question and issue a fix.

Why does the Information Store Worker Process terminate due to lazy indexing?

The lazy indexing maintenance process encounters issues arising from the maintenance of the indices.  When inserting a record into an index we assume the record is not already in the index.  When removing or updating a record within the index we expect that the record already exists.  Due to an issue in how the indices were previously built, these constraints are violated.  Our method of handling this is to terminate the Information Store Worker Process, which results in a database copy failover.  The index itself is also deleted and then rebuilt the next time it is accessed.  Although a failover occurred, the high availability framework should quickly restore access to the database and the end user should not be impacted.  The corrupted index is self-healed.  Indices and index information are logged via transaction logging and subsequently replicated to other database copies if a Database Availability Group is utilized.

Does rebuilding the content index or reseeding the content indices correct this issue?

The status of the content index catalogs has no impact on this issue.  They are two separate indexing concepts unrelated to each other in the context of this issue.

Does reseeding the database copies or removing /adding the database copies correct this issue?

No. The indices are stored within the database and any corrupted indices would be reseeded with the database.

How do I correct the lazy indexing failures and prevent database failovers?

The majority of incorrect indices occur with Exchange 2013 Cumulative Update 5.  The issue was first identified through automatic error reporting in Exchange 2013 and subsequently identified in Office 365.  It was then fixed in a post Exchange 2013 CU5 build deployed to Office 365, where the incidence of index corruption decreased.  The fix has been incorporated into Exchange 2013 Cumulative Update 6.  Customers should upgrade to Exchange 2013 CU6 to correct the index creation issue and allow future index operations to proceed successfully.

Does Exchange 2013 Cumulative Update 6 prevent a lazy indexing failure?

No.  There are several reasons an index may be considered corrupted, and isolated index corruption with subsequent self-healing may still occur.  It is also important to note that corrupted indices could have been created in Exchange 2013 CU5.  When these indices are accessed on Exchange 2013 CU6 and newer, a database failover may result, as the indices are still corrupted.  Exchange 2013 CU6 corrects the building of the initial indices, which should decrease the frequency of lazy indexing failures resulting in database failovers.

Can I expect failovers for the foreseeable future?

In the short term after application of Exchange 2013 CU6, customers may continue to experience failovers if a corrupted index is accessed.  The Information Store process automatically cleans up indices that are not accessed for 30 days.  The issue should self-correct either through failover and immediate cleanup or after the indices are aged out.

Can I do something today to correct the corrupted indices?

There are two interventions administrators can utilize to discard corrupted indices.  The first is to move the mailboxes to a different database; the move process discards indices during the move.  The second is to execute New-MailboxRepairRequest with the CorruptionType “DropAllLazyIndices” parameter against the mailbox, which effectively sets the index age timeout for the given mailbox to 0.  The repair process will render the mailbox inaccessible while the repair is in progress and could have significant performance impacts on the server.  We do NOT recommend either of these options, since they would have to be run in bulk against all mailboxes whether or not they have corrupted indices.  There is no proactive method to scan for index corruption and identify mailboxes to target the move or request against.
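For completeness, the two interventions look roughly like this. The mailbox and database names are hypothetical, and, again, we do not recommend running these in bulk:

```powershell
# Option 1: move the mailbox to another database; indices are discarded
# during the move. "scott@contoso.com" and "DB02" are placeholders.
New-MoveRequest -Identity "scott@contoso.com" -TargetDatabase "DB02"

# Option 2: drop the mailbox's lazy indices in place. The mailbox is
# inaccessible while the repair runs.
New-MailboxRepairRequest -Mailbox "scott@contoso.com" -CorruptionType DropAllLazyIndices
```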

Customers that have opened cases with support have reported a significant decrease in the number of failovers associated with lazy indexing after the application of Exchange 2013 CU6.  Failovers will continue until all corrupted indices have been accessed, deleted, and subsequently rebuilt.  Customers who experience this issue are advised to test and deploy Exchange 2013 CU6 as soon as possible.  If upgrading is not possible, the database failovers may continue, with no other negative side effects noted.

Tim McMichael
Senior Support Escalation Engineer


Be aware: October 26 2014 Russian time zone changes and Exchange


We wanted to give you a heads up that, depending on the version of Exchange you are running, there might be some impact to either the names of time zones that are changing on October 26, or the way actual meetings are displayed in affected time zones. Customers using our newer versions of Exchange, 2010 and 2013, can expect meetings to appear on calendars correctly (provided the underlying operating systems have been updated). Customers who are running Exchange 2007 might see meetings displayed at the wrong times.

We are committed to correcting these inconsistencies in our November release wave.

Please see KB article 3004235 for more information.

Nino Bilic

Introducing Microsoft Ignite – meet us in Chicago


This morning on The Official Microsoft Blog, we revealed more details about our unified technology event for enterprises in May. The event will be known as Microsoft Ignite. If you are one of the many MEC conference alumni, this is the conference for you. Microsoft Ignite is for Exchange customers using Office 365 or Exchange Server on-premises. Register now to reserve your spot and we will see you in Chicago on May 4th!

Shape the event | Join the YamJam

We are committed to making Microsoft Ignite an incredible and valuable event for all of us who are passionate about Exchange, Office, SharePoint, Lync, Project and Visio. We want your feedback to help shape plans for this event. Join us for a YamJam on the Office 365 Technical Network on Tuesday, October 21st 9:00 am – 10:00 am PDT to ask questions about the event and to provide feedback on what you want to see there. For those unfamiliar with a YamJam, it is similar to a “TweetJam” on Twitter or an “Ask Me Anything (AMA)” on Reddit, except it takes place on Yammer.

How to participate:

  1. Request access to the Office 365 Technical Network.
  2. Join the Ignite Event group. You can find it by using the Browse Groups function or through the search bar.
  3. Log in at 9:00 a.m. PDT on Tuesday, October 21st to ask questions and provide feedback on what you want to see from the Microsoft Office Division at the conference.


Come get your Calculator Updates!


Today, we released updated versions of both the Exchange 2010 Server Role Requirements Calculator and the Exchange 2013 Server Role Requirements Calculator.

The Exchange 2010 version is an incremental update and only includes minor bug fixes. You can view what changes have been made, or download the update directly.

The Exchange 2013 version, on the other hand, includes new functionality in addition to bug fixes: in particular, the ability to define how many AutoReseed volumes you would like in your design, and mailbox space modeling. You can view what changes have been made, or download the update directly.

Mailbox space modeling provides a visual graph that indicates the expected amount of time it will take to consume the send/receive prohibit quota assuming the message profile remains constant.  As you can see from the example below, if I start with a 2GB mailbox with a 200 message profile and allocate a 10GB quota (and assuming no deletes), I expect to consume that quota in roughly 22 months.  Hopefully, this feature will allow you to plan out storage allocation more appropriately moving forward.

[Image: mailbox space modeling graph]

As always, we welcome your feedback.

Ross Smith IV
Principal Program Manager
Office 365 Customer Experience

OAB Improvements in Exchange 2013 Cumulative Update 7


Note: Cumulative Update 7 (CU7) for Exchange Server 2013 will be released soon™.

Back in May, I discussed the changes we introduced in Exchange 2013 Cumulative Update 5. Specifically with CU5 and later, an OAB can only be assigned (or linked) to a single OAB generation mailbox. This architectural change addressed two deficiencies in the Exchange 2013 OAB architecture:

  1. Enabled administrators to define where an OAB is generated.
  2. Removed the capability to have multiple instances of the same OAB.

However, this change did not improve user accessibility in a distributed messaging environment. For example:

CU5 Behavior

Let’s say the blue user, whom we shall now call Scott Schnoll, has his mailbox located in Redmond. Due to a clause in his work contract, Scott needs to work out of the Portland office for the next six months.

In order to keep Scott’s address book up to date, Outlook will trigger an OAB download every 24 hours (based on the time it was last successfully downloaded), connecting to the Redmond CAS infrastructure which proxies the request to the Redmond Mailbox server that hosts the OAB generation mailbox that generates the Redmond OAB.

Having Scott’s OAB files downloaded across the WAN is not an optimal experience. Obviously, the administrator could point Scott’s mailbox to the Portland OAB instance; however, that would require Scott’s client to undergo a full OAB download. Unfortunately, prior to CU7, this is the only way to mitigate this scenario.

Shadow Distribution in Cumulative Update 7

Exchange 2013 Cumulative Update 7 introduces the capability for an OAB generation mailbox to host a shadow copy of another OAB. This functionality enables additional Mailbox servers to be an endpoint for OAB download requests. By default, this feature is disabled and is configurable per OAB.

CU7 Behavior

Referring back to our previous example, once shadow distribution is enabled for the Redmond OAB, Scott’s Outlook client, via the Autodiscover response his client gets from the Portland CAS (which as we know is actually generated on the Mailbox server in the Redmond site, as that’s where Scott’s mailbox is), will now access the Redmond OAB using a URL resolving to the Portland CAS infrastructure (https://pmail.contoso.com/oab/redmond oab guid/); this is accomplished by Autodiscover leveraging the X-SourceCafeHeader value specified in the HTTP proxy request. The first attempt to access this OAB will result in a 404 response, as the OAB files do not exist on the Portland Mailbox server that hosts the OAB generation mailbox, OAB Mailbox 1. This event invokes the OABRequestHandler, which initiates an asynchronous transfer, via BITS, of the Redmond OAB files to the Portland MBX server hosting the OAB generation mailbox. During the next attempt to synchronize the OAB, Scott’s Outlook client is able to download the necessary OAB files locally.
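You can observe this first-request behavior directly from PowerShell. A hedged sketch (the URL and OAB GUID are placeholders based on the example above):

```powershell
# Probe the shadow OAB endpoint. The first request after a new OAB is published
# may return 404 while the files are transferred via BITS; a subsequent request
# should succeed once the shadow copy is in place.
$oabUrl = 'https://pmail.contoso.com/oab/<redmond-oab-guid>/oab.xml'
try {
    (Invoke-WebRequest -Uri $oabUrl -UseDefaultCredentials).StatusCode
} catch [System.Net.WebException] {
    $_.Exception.Response.StatusCode    # NotFound (404) on the first attempt
}
```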

How do I enable shadow distribution?

The GlobalWebDistributionEnabled and VirtualDirectories properties of an OAB are still used by Autodiscover to determine which CAS OAB virtual directories are eligible candidates for distributing the OAB. Given the architecture in Exchange 2013, any CAS can proxy an incoming OAB request to the right location; therefore, with CU7 and later, the recommendation is to enable global web distribution for all OABs hosted on Exchange 2013.

Set-OfflineAddressBook <E15OAB> -VirtualDirectories $null
 
Set-OfflineAddressBook <E15OAB> -GlobalWebDistributionEnabled $true

Prior to enabling shadow distribution, you should deploy an OAB generation mailbox in each Active Directory site where Exchange 2013 infrastructure is deployed (assuming CU7 or later is deployed in each site).

New-Mailbox -Arbitration -Name "OAB Mailbox 3" -Database DB4 -UserPrincipalName oabmbx3@contoso.com -DisplayName "OAB Mailbox 3"
 
Set-Mailbox "OAB Mailbox 3" -Arbitration -OABGen $true

Once global distribution is enabled and OAB generation mailboxes are deployed, you can then enable shadow distribution on a per-OAB basis:

Set-OfflineAddressBook "Redmond OAB" -ShadowMailboxDistributionEnabled $true
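To confirm the configuration afterwards, you could inspect the OAB (property names as they appear in Exchange 2013 CU5 and later):

```powershell
# Verify the distribution settings on the OAB from the example above.
Get-OfflineAddressBook "Redmond OAB" |
    Format-List Name, GeneratingMailbox, GlobalWebDistributionEnabled, ShadowMailboxDistributionEnabled
```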

How do I disable shadow distribution for an OAB?

You can disable shadow distribution on a per-OAB basis:

Set-OfflineAddressBook "Redmond OAB" -ShadowMailboxDistributionEnabled $false

Does accessing a shadow copy trigger a full OAB download?

As discussed in OAB Improvements in Exchange 2013 Cumulative Update 5, the reason we moved to having a single OAB generation mailbox generate an OAB was to ensure that the OAB instance remained unique within the organization. Prior to CU5, all OAB generation mailboxes generated their own unique instances of the OABs, which resulted in full downloads any time a client was proxied to a different OAB generation mailbox.

The shadow copy is only distributed on-demand and is an exact duplicate of the OAB that is generated by the “master” OAB generation mailbox. As a result, an Outlook client will not be forced to perform a full download upon accessing the shadow copy files. The OABv4 conditions in Using Offline Address Books describes the conditions that can trigger a full download of an OAB.

How does a shadow distributed OAB get updated?

As soon as a new OAB is generated and published on the “master” Mailbox server, all the Mailbox servers hosting shadow copies will stop distributing their now-outdated copies. The first user who requests access to the OAB will trigger a full synchronization of the OAB to the shadow copy.

What happens if I enable shadow distribution for an OAB, but there is no OAB generation mailbox in the site where the user is located?

When shadow distribution is enabled, Autodiscover will return the OAB URL for the site from which the user request initiated. If there is no OAB generation mailbox within that site, then CAS will simply proxy the request back to the Mailbox server hosting the OAB generation mailbox that is responsible for generating the OAB.

Summary

Shadow distribution completes our work on improving the OAB capabilities in the on-premises product and hopefully satisfies the requests from our customers that deploy distributed messaging environments. As always, we welcome your feedback.

Ross Smith IV
Principal Program Manager
Office 365 Customer Experience

On-Premises Legacy Public Folder Coexistence for Exchange 2013 Cumulative Update 7 and Beyond


What are we talking about today?

In Exchange 2013 CU5 (yes 5, V, cinco, fem, and cinque) we started implementing changes to how Legacy Public Folder endpoint discovery will be performed by Outlook (for Windows) in the future. This work continues behind the scenes and will be completed with the release of Cumulative Update 7. This becomes important in on-premises Exchange coexistence environments where some or all of your on-premises user mailboxes have been moved to Exchange 2013 and your Public Folder infrastructure is still on Exchange 2007 or Exchange 2010. Anyone who has gone through the Legacy Public Folder hybrid configuration steps for Exchange Online will recognize what we are about to go through for the on-premises edition of Exchange 2013.

Why should I care about this?

Prior to CU7, Outlook clients with Exchange 2013 mailboxes were proxied to the legacy Mailbox server hosting the Public Folder being accessed, either via RPC/TCP or RPC/HTTP, depending on the client’s location, the connectivity model being used, and the configuration of the legacy Exchange servers.

With the introduction of MAPI/HTTP in Exchange 2013 SP1, we identified an issue where clients could not always access the legacy Public Folder environment after moving to the MAPI/HTTP protocol.

An analysis of this behavior led us to understand that a combination of RPC Client Access code and older code within the Outlook client enabled the client to be redirected to the legacy Public Folder store under certain circumstances. While you may be thinking this is great news, it is not the desired state – both Exchange and Outlook need to utilize a common pathway for directing clients to connect to mailbox and Public Folder data. That common pathway is Autodiscover.

In the future, both Exchange and Outlook will remove the old code that enabled the older redirection logic. As a result, new configuration steps exist that customers should undertake to coexist with legacy Public Folders and support connectivity for Outlook (for Windows) clients whose mailboxes reside on Exchange 2013, regardless of the connectivity protocol (RPC/HTTP or MAPI/HTTP) in use by their clients.

We are providing you with this information in advance of CU7’s release (no, we’re not going to answer when it will be released other than ‘when it is ready’) so you may prepare your environments for the new legacy Public Folder coexistence method. All of the commands discussed here have been available since CU5, so you may configure your environment in advance of deploying CU7 if you would like to.

Give me the short version. What do I have to do?

The configuration steps for enabling this new discovery method have been published in the following article.

There are two new commands you will need to execute prior to installing CU7 or just after (we recommend before) to ensure Exchange 2013 CU7 and later will provide Outlook the information it needs to properly discover legacy public folders.

  • From a CU5 or later Exchange 2013 Server: Use the Set-OrganizationConfig cmdlet to assign the legacy public folder discovery mailbox(es) to the RemotePublicFolderMailboxes value of the organization.
  • From a CU5 or later Exchange 2013 Server: Use the Set-OrganizationConfig cmdlet to set the PublicFoldersEnabled attribute of your Exchange organization from Local to Remote.
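In Exchange Management Shell terms, the two commands look like this ("PFDiscovery-001" is a placeholder for your own legacy Public Folder discovery mailbox):

```powershell
# Run from an Exchange 2013 CU5 or later server.
Set-OrganizationConfig -RemotePublicFolderMailboxes "PFDiscovery-001"
Set-OrganizationConfig -PublicFoldersEnabled Remote
```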

With the above settings configured, Exchange 2013 will begin using the new coexistence code paths and returning a new section in Autodiscover responses to Exchange 2013 mailbox users, similar to the following:

<PublicFolderInformation>
<SmtpAddress>PFDiscovery-001@contoso.com</SmtpAddress>
</PublicFolderInformation>

With this information Outlook will then perform a second Autodiscover request using the provided SMTP address. This SMTP address is for a legacy Public Folder discovery mailbox that resides on an Exchange 2007 or Exchange 2010 mailbox server that also serves a public folder database (PFDB). In the above example Outlook would perform an Autodiscover request for PFDiscovery-001@contoso.com to discover the connection endpoint (RPC, or RPC/HTTPS) to use when the Exchange 2013 user is accessing your organization’s legacy Public Folder. Outlook is not logging on as this mailbox, nor is it actively using this mailbox to access the legacy public folder content. The mailbox strictly exists to be able to perform an Autodiscover request/response such that Outlook receives a valid connection endpoint for your legacy Public Folders.

Without these new settings being configured, Exchange 2013 will continue to use the old code paths which will be removed at some point in the future. It is important that all on-premises Exchange 2013 organizations fully configure their environment to ensure uninterrupted legacy Public Folder access in the future.

I like pictures and examples. Is there a longer version?

Yes, we have you covered. Let us go through configuring an Exchange 2013 environment for Exchange 2010 legacy public folder access as it is the more complicated of the two scenarios to configure. If you need to configure Exchange 2007 there are fewer steps involved and you can reference the TechNet documentation.

  1. Identify the Public Folder database(s) you need users to be able to connect to initially by examining the PublicFolderDatabase attribute of your Exchange 2013 mailbox databases. This attribute defines the default legacy public folder database for each Exchange 2013 mailbox database.

    Below we can see there are two legacy public folder databases used as defaults for our Exchange 2013 databases.

    pf1

  2. Add the Client Access Server role if the PFDB resides on an Exchange 2010 Mailbox Server without CAS installed. The addition of the CAS role will ensure public folder replica referrals happen appropriately if a folder a user is accessing does not have a local replica in the PFDB. If the PFDB resides on a server with both the Mailbox and Client Access Server roles (whether Hub Transport or UM is installed is irrelevant here), you can skip this step and go to step 3.
  3. After installing the CAS role, if it was necessary, configure the role as you would any other CAS in this AD site, with the proper virtual directory and other settings, to ensure Autodiscover results for clients are not impacted by a bunch of default virtual directory values. You do not have to add this new CAS role to your load balancer pool if you do not want to. If you did not have to install the CAS role because it was already installed on the PFDB server, please skip to step 4.
  4. Create a new empty mailbox database on the Mailbox Server containing the PFDB to be accessed. If this mailbox server is a member of a DAG, please do not create additional copies of this particular mailbox database. You can safely leave this mailbox database as a single copy.

    Note: If you are unable to create an additional mailbox database in this step due to using Exchange Server Standard Edition, you can utilize an existing mailbox database in this case.

  5. Skip this step if you are re-using another mailbox database due to Exchange Server Standard Edition limitations. Using the Set-MailboxDatabase cmdlet, exclude this new empty mailbox database from automatic mailbox provisioning by setting the IsExcludedFromProvisioning flag to $True.
  6. Skip this step if you are re-using another mailbox database due to Exchange Server Standard Edition limitations. Using the Set-MailboxDatabase cmdlet, set the RPCClientAccessServer value of the new empty mailbox database to the FQDN of the Mailbox Server holding the public folder database to be accessed. The RPCClientAccessServer value is only used for RPC/TCP connectivity and this does not mean a new name is added to your SSL certificate as HTTPS will not be used here (see Item #3 here for explanation).
  7. Create a new mailbox inside the empty mailbox database you just created on the server holding your PFDB. This will be known as a Public Folder discovery mailbox. This mailbox is not accessed in any way. This mailbox is used as a target to retrieve connection settings via Autodiscover and nothing more. A naming convention such as PFDiscovery-<ServerName> or PFDiscovery-<###> is helpful to identify these mailboxes in the future. This mailbox must have an SMTP address which can be used by Autodiscover internally, and also used externally if you have external users requiring access to legacy public folders. If you are re-using another mailbox database due to Exchange Server Standard Edition limitations, the mailbox will reside in an existing database.

    Below you can see the mailbox we created and its SMTP address.

    pf2

  8. Using the Set-Mailbox cmdlet hide your new discovery mailbox(es) from address lists by setting the HiddenFromAddressListsEnabled parameter to $True.

    pf3

  9. Repeat steps 1-7 for additional Public Folder databases if you would like to distribute client connections across more than one PFDB.
  10. Prior to running the next two commands we look at the current organization configuration in its default state.

    pf4

  11. From a CU5 or higher Exchange 2013 Server: Using the Set-OrganizationConfig cmdlet, assign the PF discovery mailbox(es) to the RemotePublicFolderMailboxes value of the organization.
  12. From a CU5 or later Exchange 2013 Server: Using the Set-OrganizationConfig cmdlet, set the PublicFoldersEnabled attribute of your Exchange organization to Remote.

    Running our Set-OrganizationConfig commands.

    pf5

    Note: If you need to add multiple mailboxes you can use this example PowerShell command format.

    Set-OrganizationConfig -RemotePublicFolderMailboxes "PFDiscovery-001", "PFDiscovery-002"

    Validating the changes took place.

    pf6

  13. After you configure these two new settings and a few caches expire, you should be able to validate that you are now getting the <PublicFolderInformation> section back in the initial Autodiscover response for users with Exchange 2013 mailboxes.

    pf7

  14. If you were to run your favorite HTTP proxy/logging tool while Outlook is running, you would eventually see another Autodiscover query/response, in our example for the mailbox PFDiscovery-010@corp.contoso.com returned above. This is when Outlook learns where and how to connect to your legacy Public Folder infrastructure.

    pf8

  15. Confirm via Outlook you can connect to the legacy Public Folder hierarchy. Below are examples of using MAPI/HTTP for the primary mailbox and either RPC/HTTP or RPC/TCP for the legacy Public Folders. In our example lab the Exchange 2010 server named CON-E2K10-002 holds the PFDB being accessed. This public folder database was accessed because it is the default public folder database of the Exchange 2013 mailbox database the user resides in. If you are not yet using MAPI/HTTP in your Exchange 2013 environment, then the screenshots below would look the same except for replacing “HTTP” with “RPC/TCP.”

MAPI/HTTP for the Primary mailbox and RPC/HTTP for legacy Public Folders

pf9

MAPI/HTTP for the Primary mailbox and RPC/TCP for legacy Public Folders

pf11

FAQ

Q: We're running Exchange 2013 SP1 (or earlier) and plan on upgrading directly to CU7. Our Exchange 2013 users seem to be accessing legacy Public Folders without issue today. Does this mean their legacy Public Folder access will break when CU7 is applied?

A: CU7 has logic that will only use the new code paths if RemotePublicFolderMailboxes is not empty and PublicFoldersEnabled is set to ‘Remote’. If you upgrade directly from SP1 or earlier to CU7, Exchange will continue to use the old code paths until you complete the necessary configuration steps, so users are not interrupted post-upgrade.
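You can confirm whether both values are in place with the Get-OrganizationConfig cmdlet, for example:

Get-OrganizationConfig | FL PublicFoldersEnabled,RemotePublicFolderMailboxes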

Q: Does Outlook Anywhere need to be enabled in the legacy (2007/2010) environment for this to work if we do not currently provide external access to Exchange via OA?

A: No, Outlook Anywhere does not need to be enabled if the only connectivity method you need to provide to legacy Exchange versions is RPC for internal users or external users connecting via a VPN tunnel. If OA is disabled in the 2007/2010 environment, then the Autodiscover results will inform Outlook to use RPC via the EXCH Outlook Provider instead of RPC/HTTP via the EXPR Outlook Provider to connect to the public folder database.

Q: Are there any specific Outlook versions/builds required for this to work?

A: As a general rule we always suggest keeping Outlook up to date with both service packs and public updates, and we maintain that suggestion here. As long as you are running a version of Outlook 2010 or 2013 supported by Office 365 this feature should work. If this guidance ever changes, we will update necessary documentation.

Q: How does Exchange 2013 choose what Remote Public Folder Mailbox to hand out to clients if more than one is configured in the RemotePublicFolderMailboxes variable? Is it random, round robin, looking at availability?

A: By default Exchange looks at the hash of the user calling into Autodiscover and will pick an entry from the array of mailboxes in RemotePublicFolderMailboxes or use the default public folder mailbox value if it is explicitly set on the mailbox. There is no logic based on user location versus PFDB location or anything of such nature.
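If you need a specific user to bypass this hash-based selection, you can set the value explicitly on the mailbox using the DefaultPublicFolderMailbox parameter (the user and mailbox names below are illustrative):

Set-Mailbox "David" -DefaultPublicFolderMailbox "PFDiscovery-002"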

Q: Will Exchange 2013 check to make sure the server holding a PF discovery mailbox is up and reachable before a client attempts to retrieve its connection settings via Autodiscover?

A: No, there is no availability check to ensure the legacy server is available before the PF discovery mailbox is given to a client to look up via Autodiscover.

Q: How many legacy public folder databases do I need accessible?

A: Public folder scalability guidance for Exchange 2007 and Exchange 2010 recommended no more than 10,000 active users connecting to a single PFDB. Based on that guidance, at least one PFDB per 10,000 active users should be accessible. If you have 50,000 users in your organization, a conservative number would be no fewer than five public folder databases.

Note: This is a starting point. Your environment may vary and as a result require more or even fewer public folder databases as you monitor your system performance, user concurrency, and user client experience in your legacy environment.

Q: How many PF discovery mailboxes do I need?

A: At this time we are suggesting one per PFDB to be accessed.

Q: How do I control what particular PFDB the user connects to first?

A: For environments with geographically dispersed locations it may be beneficial to ensure users connect to a PFDB close to their home location over a well-performing network path. You can make this happen by defining the default public folder database on the user’s Exchange 2013 mailbox database and locating users with similar geographical needs in the same Exchange 2013 mailbox database.

The commands are slightly different depending on whether you are setting an Exchange 2010 or an Exchange 2007 public folder database as the default for an Exchange 2013 mailbox database. The command will warn you that the ‘PublicFolderDatabase’ parameter has been deprecated, but it does do what it is supposed to do for coexistence purposes.

Using an Exchange 2007 Public Folder Database

Set-MailboxDatabase <2013DatabaseName> -PublicFolderDatabase <2007ServerName>\<Storage GroupName>\<PFDatabaseName>

pf12

Using an Exchange 2010 Public Folder Database

Set-MailboxDatabase <2013DatabaseName> -PublicFolderDatabase <2010PFDatabaseName>

pf13

Q: For Exchange 2010, do I really need to install CAS on every Mailbox server with a PFDB to be accessed and create a new mailbox database?

A: At this time, yes, but we are evaluating a few other options to help improve and possibly streamline the coexistence configuration in the future. If we are able to streamline this process in the future we will be sure to update you. Remember, you do not need to add the server to your load balancer pool simply because CAS has been installed. The server should not see the volume of client traffic that other CAS in the AD site experience.

Summary

After implementing this configuration you will have a more robust and predictable legacy Public Folder connectivity experience with Exchange 2013 Cumulative Update 7 and beyond by making your legacy Public Folder infrastructure discoverable via Autodiscover by your Outlook (for Windows) clients. We look forward to your comments and questions below. Be on the lookout soon for another article that will go into detail on deployment recommendations for Exchange 2013 public folders themselves.

Brian Day
Senior Program Manager
Office 365 Customer Experience

Introducing the IMAP Migration Troubleshooter


Situation

If you are transitioning your organization from a non-Exchange system such as Google or Lotus Notes to Office 365, you typically need to follow the IMAP migration path. Diagnosing and remediating any issues you might run into during such a migration can be difficult for people unfamiliar with the matter, so we worked with our Support teams to provide guidance for exactly such cases and packaged it for you in a wizard-like package.

Introducing the IMAP Migration Troubleshooter

IMAP Migrations provide organizations with an effective way to move email from any environment that supports the IMAP protocol. This is usually used for any non-Exchange source email system. If Exchange is the source you would most likely opt for the Staged, Cutover, or Hybrid migration path.

To help you with troubleshooting, we have released a new guided walkthrough (GWT) for IMAP migrations. This link will soon be surfaced in various KB articles as well as the support ticket creation process via the Office 365 portal.

image

The intent of this GWT is to take a tenant administrator step-by-step through the common troubleshooting tasks to solve their IMAP migration issues. We took the most common migration failure scenarios and put them into this easy-to-follow guide.

http://aka.ms/IMAPMigrationGWT

Feedback

If you see any issues with the IMAP migration troubleshooter or think there are scenarios that should be added or improved, please let us know at MigrationGWT@microsoft.com.

Special thanks to all that contributed to the creation of the GWT: Kevyn Pietsch, Nagesh Mahadev, Timothy Heeney, Shawn Sullivan, and to the writers and KE team for assisting with collaboration and coordination efforts: Charlotte Raymundo and Sharon Shen.

If you are looking for assistance on any troubleshooting Hybrid migrations, see the Exchange Online Migration Guided Walk Through (GWT).

Kevyn Pietsch, Nagesh Mahadev, Timothy Heeney

November Exchange Releases delayed until December


We know that many of you are anxiously awaiting the release of our quarterly Exchange updates planned for November. Earlier today the Exchange Team decided to hold the release of these packages until December. We made this decision to provide more time to resolve a late-breaking issue in the installer package used with Exchange Server 2013. We have discovered that in some instances, OWA files will be corrupted by installation of a Security Update. The issue is resolved by executing an MSI repair operation before a Security Update is installed. We do not believe this is acceptable behavior, and it is unfortunately something that customers might only discover after they install a Security Update.

As of this blog announcement, we believe the installer defect is limited to Exchange Server 2013. However, we are also evaluating previous versions of Exchange Server and are delaying the planned 2007 and 2010 releases as well to complete that investigation.

The Exchange team remains committed to ensuring that our customers have the best possible experience and because of that we have opted to delay the November releases to address this issue.

Exchange Team


Exchange releases: December 2014


Editor's Note: Updates added below for important information related to Exchange Server 2010 SP3 Update Rollup 8.

The Exchange team is announcing today a number of releases. Today’s releases include updates for Exchange Server 2013, 2010, and 2007. The following packages are now available on the Microsoft download center.

These releases represent the latest set of fixes available for each of their respective products. The releases include fixes for customer reported issues and minor feature improvements. The cumulative updates and rollup updates for each product version contain important updates for recently introduced Russian time zones, as well as fixes for the security issues identified in MS14-075. Also available for release today are MS14-075 Security Updates for Exchange Server 2013 Service Pack 1 and Exchange Server 2013 Cumulative Update 6.

Exchange Server 2013 Cumulative Update 7 includes updates which make migrating to Exchange Server 2013 easier. These include:

  • Support for Public Folder Hierarchies in Exchange Server 2013 which contain 250,000 public folders
  • Improved support for OAB distribution in large Exchange Server 2013 environments

Customers with Public Folders deployed in an environment where multiple Exchange versions co-exist will want to read Brian Day’s post for additional information.

Cumulative Update 7 includes minor improvements in the area of backup. We encourage all customers who backup their Exchange databases to upgrade to Cumulative Update 7 as soon as possible and complete a full backup once the upgrade has been completed. These improvements remove potential challenges restoring a previously backed up database.

For the latest information and product announcements about Exchange 2013, please read What's New in Exchange 2013, Release Notes and Exchange 2013 documentation on TechNet.

Cumulative Update 7 includes Exchange-related updates to Active Directory schema and configuration. For information on extending schema and configuring Active Directory, please review Prepare Active Directory and Domains in Exchange 2013 documentation.

Reminder: Customers in hybrid deployments where Exchange is deployed on-premises and in the cloud, or who are using Exchange Online Archiving (EOA) with their on-premises Exchange deployment are required to deploy the most current Cumulative Update release.

Update 12/12/2014:

Exchange Server 2010 SP3 Update Rollup 8 has been re-released to the Microsoft download center, resolving a regression discovered in the initial release. The updated RU8 package corrects the issue which impacted users connecting to Exchange from Outlook. The issue was isolated to the MAPI RPC layer, which allowed us to quickly deliver the updated RU8 package. The updated RU8 package is version 14.03.0224.002 if you need to confirm you have the updated package. The updates for Exchange Server 2013 and 2007 were not impacted by this regression and have not been updated.

Update 12/10/2014:

An issue has been identified in the Exchange Server 2010 SP3 Update Rollup 8. The update has been recalled and is no longer available on the download center pending a new RU8 release. Customers should not proceed with deployments of this update until the new RU8 version is made available. Customers who have already started deployment of RU8 should rollback this update.

The issue impacts the ability of Outlook to connect to Exchange, thus we are taking the action to recall the RU8 to resolve this problem. We will deliver a revised RU8 package as soon as the issue can be isolated, corrected, and validated. We will publish further updates to this blog post regarding RU8.

This issue only impacts the Exchange Server 2010 SP3 RU8 update, the other updates remain valid and customers can continue with deployment of these packages.

The Exchange Team

How to Configure S/MIME in Office 365


S/MIME in Office 365

S/MIME (Secure/Multipurpose Internet Mail Extensions) is a standard for public key encryption and digital signing of MIME data. Configuring S/MIME in Office 365 is a slightly different procedure than configuring S/MIME on-premises. This blog is for people who want to move from on-premises to Exchange Online and want to continue to use S/MIME. This article will also apply to any Office 365 customers who want to use S/MIME for sending digitally signed and encrypted mails.

Configuring S/MIME will allow users to encrypt and/or digitally sign an email. S/MIME provides the following cryptographic security services for electronic messaging applications: authentication, message integrity, non-repudiation of origin (using digital signatures), privacy, and data security (using encryption). Further, Office 365 also provides the capability for end users to compose, encrypt, decrypt, read, and digitally sign emails between two users in an organization using Outlook, Outlook Web App (OWA) or Exchange ActiveSync (EAS) clients.

Below, we will take you through the configuration steps that you will need to follow to configure S/MIME for Exchange Online Only (Scenario 1), and for Exchange Hybrid (Scenario 2).

Scenario 1: Exchange Online

In this scenario, all the users are hosted on cloud and there is no on-premises Exchange organization.

Requirements

  1. .SST file (serialized certificate store): The SST file contains all the root and intermediate certificates that are used when validating an S/MIME message in Office 365. The .SST file is created from the certificate store, as explained below.
  2. End users’ certificates for signing and encrypting messages, issued from a Certificate Authority (CA), either a Windows-based CA or a third-party CA.

Configuration

Remember that in Exchange Online, only the SST will be used for S/MIME certificate validation.

1. Create a .SST file for the Trusted Root CA / Intermediate CA of the certificate issued to the users:
You can use either the Certificates MMC or PowerShell cmdlets to export the SST file. I am using the Certificates console to export the .SST here:

Open the certmgr.msc snap-in, expand Trusted Root Certificate Authorities > Certificates, select the CA certificates which issued the certificates to end users for S/MIME, and right-click > All Tasks > Export…

image

Note: There may be some Intermediate CAs. You can move them to the Trusted Root CA folder, select them (including the Trusted Root CA certificates), and export them all in one .SST file.

2. Select Microsoft Serialized Certificate Store(.SST) > Click Next and save the SST file:

image

3. Upload the .SST file to Office 365:
Update the SST in Exchange Online by executing the following commands using remote PowerShell.

$sst = Get-Content <sst file copied from the box>.sst -Encoding Byte

(Example: $sst = Get-Content TenantRoot.sst -Encoding Byte)

Set-SmimeConfig -SMIMECertificateIssuingCA $sst
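To verify the upload took effect, you can read the configuration back with the Get-SmimeConfig cmdlet; the certificate data is returned as a byte array:

Get-SmimeConfig | FL SMIMECertificateIssuingCA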

4. Publish the user’s certificate to the Exchange Online GAL (Global Address List) using Outlook. If it is not published, users will not be able to exchange S/MIME encrypted messages.

Note: To publish the certificate, the user must first have the certificate installed on their local machine.

  • On the File menu in Outlook 2013, click Options.
  • On the Outlook Options window, click Trust Center, click Trust Center Settings..., and then click Email Security.
  • In the Trust Center window, click Settings… (Here, you need to choose certificate issued by the CA you are going to use for S/MIME).
  • In the Change Security Settings window, type the Security Settings Name (you can name it anything) and choose Signing and Encryption certificate. Select the appropriate certificate assigned in previous steps, leave the Algorithm default and click OK.

image

  • Once the information is selected, you will notice the Default Setting is populated with Security Settings Name. Now you can click the Publish to GAL button. To publish the certificate to the GAL, click OK.

image

5. To confirm the certificate is published in AAD (Azure Active Directory), connect to Exchange Online using remote PowerShell and run the following command (you can format the output with FL or FT). Check to make sure that the UserSMimeCertificate attribute is populated with the certificate information. If not, return to step 4.

Get-Mailbox <user> | FL *user*

image

6. Once you confirm the end user has the certificate on their machine (under Certificates > Personal store) and it is also published in AAD, the users can use Outlook, OWA, or EAS to send and receive S/MIME messages.

Note: Make sure you check the S/MIME Supported Clients section below before exchanging S/MIME messages.

Scenario 2: Exchange Hybrid

In Exchange Hybrid topology, some mailboxes are homed on-premises and some mailboxes are homed online, and users share the same e-mail address space.

Requirements:

  1. Public Key Infrastructure (PKI). You can use Active Directory Certificate Services to issue certificates to the end users.
  2. SST file (Microsoft serialized certificate store). Tenant admins will have to configure their tenant in O365 with the signing certificates’ issuing CA and intermediate certificate information. They will have to produce an SST file, which is a collection of certificates, and then import it into O365 to validate S/MIME.
  3. DirSync. You will need version 6593.0012 or higher of the DirSync tool. DirSync is used to synchronize the Active Directory user object to the Azure AD, so that cloud users can also see the certificate information of recipients when performing S/MIME (encrypt) operation.

You can verify the DirSync version following these steps:

  • Open Control Panel.
  • Click Programs.
  • Click Programs and Features.
  • Click Windows Azure Active Directory Sync tool.
  • Check the version as the screenshot below:

image
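If you prefer PowerShell over Control Panel, a WMI query such as the following can also list the installed version (the display name filter below is an assumption and may need adjusting for your DirSync release):

Get-WmiObject -Class Win32_Product | Where-Object { $_.Name -like "*Active Directory Sync*" } | FL Name, Version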

Configuration:

1. Public Key Infrastructure (PKI)

The users in your organization must have certificates issued for digital signing and encryption purposes. You can either install a Certificate Authority on-premises to issue certificates to the end users or have third-party certificates issued to them. There are two attributes in a user object where certificate information is stored: 1) UserCertificate and 2) UserSMimeCertificate.

UserCertificate is populated automatically in on-premises deployment with a Windows root CA. This is populated at the time the user enrolls for a user certificate. This could be done manually for each user, or an administrator can set a GPO to automatically enroll all users.

Certificates are stored in the UserSMimeCertificate attribute when an Outlook client publishes a certificate to the GAL. Outlook 2010 and above will populate both attributes with the same certificate (see http://support.microsoft.com/kb/2840546), but Outlook 2007 and below will not (see http://support.microsoft.com/kb/822504).

2. When setting a SST file, remember in Exchange online, only the SST will be used for S/MIME certificate validation.

Create a SST file for the Trusted Root CA / Intermediate CA of the certificate issued to the users:
You can use either Certificate MMC or PowerShell cmdlets to export the SST file. I am using the Certificate console to export the SST here:

Open the certmgr.msc snap-in, expand Trusted Root Certificate Authorities > Certificates, select the CA certificates which issued the certificates to end users for S/MIME, and right-click > All Tasks > Export.

image

Note: There may be some Intermediate CAs. If there are, move them to the Trusted Root CA folder, select them (including the Trusted Root CA certificates), and export them all in one .SST file.

Select SST > Click Next and save the SST file:

image

Upload the .SST file to Office 365:
Update the SST in Exchange Online by running the commands below using remote PowerShell:

$sst = Get-Content <sst file copied from the box>.sst -Encoding Byte

(Example: $sst = Get-Content TenantRoot.sst -Encoding Byte)

Set-SmimeConfig -SMIMECertificateIssuingCA $sst

3. If end users are issued third party certificates, they can publish the certificate information to the GAL by following these steps:

Note: To publish the certificate, the users must first have the certificate installed on their local machine.

  • On the File menu in Outlook 2013, click Options.
  • On the Outlook Options window, click Trust Center, click Trust Center Settings..., then Email Security.
  • On Trust Center window, click Settings… (Here, you need to choose which certificate you are going to use for S/MIME).
  • In the Change Security Settings window, type the Security Settings Name (you can name it anything), Choose Signing and Encryption certificate, select the appropriate certificate assigned in previous steps, leave the Algorithm default, and click OK.

image

  • Once the information is selected, you will notice the Default Setting is populated with Security Settings Name. Now you can click the Publish to GAL button. To publish the certificate to the GAL, click OK.

image

  • To confirm that the certificate is published in AAD (Azure Active Directory), connect to Exchange Online using remote PowerShell and run the following command. Check to see if the UserSMimeCertificate attribute is populated with the certificate information. If not, repeat the publishing steps above.

Get-Mailbox <user> | FL *user*

image

If Windows Certificate Authority is used, then the CA will publish the certificate information into the user object. In both cases, you need to use DirSync to replicate the on-premises Active Directory information to the cloud so that cloud users can exchange S/MIME messages.

4. After the above steps, your end users can use Outlook, OWA, or EAS to send and receive S/MIME messages.

Note: Make sure you check the S/MIME Supported Clients section below before exchanging S/MIME messages.

S/MIME Supported Clients

All the client machines should have the PKI-issued user certificate installed under (whichever is applicable):

Certificates - Current User
- Personal - Certificates
- Trusted Root Certification Authorities - Certificates
- Intermediate Certification Authorities - Certificates

If the PKI-issued certificate is not available, users will not be able to send digitally signed messages or decrypt S/MIME encrypted messages.

Outlook Web App:

  • S/MIME in OWA is supported only on Windows Vista or greater with Internet Explorer 9 and above. It is not supported on other browsers or on MOWA (Mobile for Outlook Web Access).
  • Third party certificates aren’t supported for OWA S/MIME; only Windows Certificate Authority issued certificates are supported.
  • To use S/MIME in Outlook Web Access, the client system on which the user is running Internet Explorer must have the Outlook Web Access S/MIME control installed. S/MIME functionality in Outlook Web Access cannot be used on a system that does not have the control installed.

The S/MIME control in OWA requires .NET 4.5. All users accessing their mailboxes using OWA should install this on their machine. .NET 4.5 can be installed from the Microsoft Downloads page.

Outlook

  • Outlook 2010 and above are supported.

EAS Clients

  • Windows Phone 8.1 is a supported EAS client for S/MIME. To learn how to install a certificate on Windows Phone 8.1, see Installing digital certificates.
  • For any other devices, you need to check with the device vendors.

FAQ

1. Do both of these user object attributes (UserSMIMECertificate and UserCertificate) need to be populated with certificate information?

Either one, or both, can be populated.

2. Do we support S/MIME for Cross Org/Cross Tenant?

Cross-org/cross-tenant S/MIME is not supported in Outlook Web App or EAS (Exchange ActiveSync).

With Outlook, it is a supported scenario. So, when we are looking for certificates for recipients, we check in all the Address Books.  This includes the Global Address Book (GAL), the Contact Address Book (contacts folder), as well as any other address books (which includes LDAP address books). As long as we can find an entry in an address book for the recipient and it contains a certificate that we trust, then we can use it and send S/MIME mail.

Note: Certificates on contacts in the Exchange Online GAL are currently not supported.

3. When I select Encrypt mail and click on Send button in Outlook/OWA, I get error saying that the sender does not have a certificate. Why?

In the example below, David is the sender. He was trying to send an S/MIME encrypted email message to a couple of recipients who have certificates published in Active Directory, but David himself doesn’t have a certificate. When he clicks Send, he gets the error below.

image

So, when sending an S/MIME encrypted message, we always check for the sender’s certificate so that the message is encrypted in a way that allows the sender to read it later from the Sent Items folder in Outlook.

References

Understanding S/MIME

Special thanks to Frank Brown, Mike Brown, Timothy Heeney, Tariq Sharif, Vikas Malhotra and Eduardo Melo for reviewing this post!

Suresh Kumar

Concerning Trends Discovered During Several Critical Escalations


Over the last several months, I have been involved in several critical customer escalations (what we refer to as critsits) for Exchange 2010 and Exchange 2013. As a result of my involvement, I have noticed several common themes and trends. The intent of this blog post is to describe some of these common issues and problems, and hopefully this post will lead you to come to the same conclusion that I have – that many of these issues could have been avoided by taking sensible, proactive steps.

Software Patching

By far, the most common issue was that almost every customer was running out-of-date software. This included OS patches, Exchange patches, Outlook client patches, drivers, and firmware. One might think that being out-of-date is not such a bad thing, but in almost every case, the customer was experiencing known issues that were resolved in current releases. Maintaining currency also ensures an environment is protected from known security defects. In addition, as the software version ages, it eventually goes out of support (e.g., Exchange Server 2010 Service Pack 2).

Software patching is not simply an issue for Microsoft software. You must also ensure that all inter-dependent solutions (e.g., Blackberry Enterprise Server, backup software, etc.) are kept up-to-date for a specific release as this ensures optimal reliability and compatibility.

Microsoft recommends adopting a software update strategy that ensures all software follows an N to N-1 policy, where N is a service pack, update rollup, cumulative update, maintenance release, or whatever terminology is used by the software vendor. We strongly recommend that our customers also adopt a similar strategy with respect to hardware firmware and drivers, ensuring that network cards, BIOS, and storage controllers/interfaces are kept up to date.

Customers must also follow the software vendor’s Software Lifecycle and appropriately plan on upgrading to a supported version in the event that support for a specific version is about to expire or is already out of support.

For Exchange 2010, this means having all servers deployed with Service Pack 3 and either Rollup 7 or Rollup 8 (at the time of this writing). For Exchange 2013, this means having all servers deployed with Cumulative Update 6 or Cumulative Update 7 (at the time of this writing).

For environments that have a hybrid configuration with Office 365, the servers participating in the hybrid configuration must be running the latest version (e.g., Exchange 2010 SP3 RU8 or Exchange 2013 CU7) or the prior version (e.g., Exchange 2010 SP3 RU7 or Exchange 2013 CU6) in order to maintain and ensure compatibility with Office 365. There are some required dependencies for hybrid deployments, so it’s even more critical you keep your software up to date if you choose to go hybrid.

Change Control

Change control is a critical process that is used to ensure an environment remains healthy. Change control enables you to build a process by which you can identify, approve, and reject proposed changes. It also provides a means by which you can develop a historical accounting of changes that occur. Often times I find that customers only leverage a change control process for “big ticket” items, and forego the change control process for what are deemed as “simple changes.”

In addition to building a change control process, it is also critical to ensure that all proposed changes are vetted in a lab environment that closely mirrors production and includes any 3rd party applications you have integrated (the number of times I have seen Exchange get updated and then heard that the integrated app has failed is non-zero, to use a developer’s phrase).

While lab environments provide a great means to validate the functionality of a proposed change, they often do not provide a view on the scalability impact of a change. One way to address this is to leverage a “slice in production” where a change is deployed to a subset of the user population. This subset of the user population can be isolated using a variety of means, depending on the technology (e.g., dedicated forests, dedicated hardware, etc.). Within Office 365, we use slices in production in a variety of ways; for example, we leverage them to test (or what we call dogfood) new functionality prior to customer release, and we use them as a First Release mechanism so that customers can experience new functionality prior to worldwide deployment.

If you can’t build a scale impact lab, you should at a minimum build an environment that includes all of the component pieces you have in place, and make sure you keep it updated so you can validate changes within your core usage scenarios.

The other common theme I saw is bundling multiple changes together in a single change control request. While bundling multiple changes together may seem innocuous, when you are troubleshooting an issue, the last thing you want to do is make multiple changes. First, if the issue gets resolved, you do not know which particular change resolved the issue. Second, it is entirely possible the changes may exacerbate the current issue.

Complexity

Failure happens. There is no technology that can change that fact. Disks, servers, racks, network appliances, cables, power substations, pumps, generators, operating systems, applications, drivers, and other services – there is simply no part of an IT service that is not subject to failure.

This is why we use built-in redundancy to mitigate failures. Where one entity is likely to fail, two or more entities are used. This pattern can be observed in Web server arrays, disk arrays, front-end and back-end pools, and the like. But redundancy can be prohibitively expensive (as a simple multiplication of cost). For example, the cost and complexity of the SAN-based storage system that was at the heart of Exchange until the 2007 release, drove the Exchange Team to evolve Exchange to integrate key elements of storage directly into its architecture. Every SAN system and every disk will ultimately fail, and implementing a highly-redundant system using SAN technology is cost-prohibitive, so Exchange evolved from requiring expensive, scaled-up, high-performance storage systems, to being optimized for commodity scaled-out servers with commodity low-performance SAS/SATA drives in a JBOD configuration with commodity disk controllers. This architecture enables Exchange to be resilient to any storage failure.

By building a replication architecture into Exchange and optimizing Exchange for commodity hardware, failure modes are predictable from a hardware perspective, and redundancy can be removed from other hardware layers, as well. Redundant NICs, redundant power supplies, etc., can also be removed from the server hardware. Whether it is a disk, a controller, or a motherboard that fails, the end result is the same: another database copy is activated on another server.

The more complex the hardware or software architecture, the more unpredictable failure events can be. Managing failure at scale requires making recovery predictable, which drives the necessity for predictable failure modes. Examples of complex redundancy are active/passive network appliance pairs, aggregation points on a network with complex routing configurations, network teaming, RAID, multiple fiber pathways, and so forth.

Removing complex redundancy seems counter-intuitive – how can removing hardware redundancy increase availability? Moving away from complex redundancy models to a software-based redundancy model creates a predictable failure mode.

Several of my critsit escalations involved customers with complex architectures where components within the architecture were part of the systemic issue being resolved:

  1. Load balancers were not configured to use round robin or least connection management for Exchange 2013. Customers that did implement least connection management did not have the “slow start” feature enabled. Slow start ensures that when a server is returned to a load-balanced pool, it is not immediately flooded with connections. Instead, the connections are slowly ramped up on that server. If your load balancer does not provide a slow start function for least connection management, we strongly recommend using round robin connection management.
  2. Hypervisor hosts were not configured in accordance with vendor recommendations for large socket/pCPU machines.
  3. Firewalls between Exchange servers, Active Directory servers, or Lync servers. As discussed in Exchange, Firewalls, and Support…Oh, my!, Microsoft does not support configurations when Exchange servers have network port restrictions that interfere with communicating with other Exchange servers, Active Directory servers, or Lync servers.
  4. File-based anti-virus exclusions were not in place or were incorrect.
  5. Deploying asymmetric designs in a “failover datacenter.” In all instances, there were fewer servers in the failover datacenter than the primary datacenter. The logic used in designing these architectures was that the failover datacenter would only be used during maintenance activities or during catastrophic events. The fundamental flaw in this logic is that it assumes there will not be 100% user activity. As a result, users are affected by higher response latencies, slower mail delivery, and other performance issues when the failover datacenter is activated.
  6. SSL offloading (another supported, but rarely recommended scenario) was not configured per our guidance.
  7. Storage area networks were not designed to deliver the capacity and IO requirements necessary to support the messaging environment. We have seen customers invest in tiered storage to help Exchange and other applications; however, due to the way the Extensible Storage Engine and the Managed Store work and the random nature of the requests being made, tiered storage is not beneficial for Exchange. The IO is simply not available when needed.

How can the complexity be reduced? For Exchange, we use predictable recovery models (for example, activation of a database copy). Our Preferred Architecture is designed to reduce complexity and deliver a symmetrical design that ensures that the user experience is maintained when failures occur.

Ignoring Recommendations

Another concerning trend I witnessed is that customers repeatedly ignored recommendations from their product vendors. There are many reasons I’ve heard to explain away why a vendor’s advice about configuring or managing their own product was ignored, but it’s rare to see a case where a customer honestly knows more about how a vendor’s product works than does the vendor. If the vendor tells you to configure X or update to version Y, chances are they are telling you for a reason, and you would be wise to follow that advice and not ignore it.

Microsoft’s recommendations are grounded in data: the data we collect during a support call, the data we collect during a Risk Assessment, and the data we get from you. All of this data is analyzed before recommendations are made. And because we have a lot of customers, the collective learnings we get from you play a big part.

Deployment Practices

When deploying a new version of software, whether it's Exchange or another product, it's important to follow an appropriate deployment plan. Customers that don't follow one take on the unnecessary risk of running into unexpected issues during the deployment.

Proper planning of an Exchange deployment is imperative. At a minimum, any deployment plan you use should include the following steps:

  1. Identify the business and technical requirements that need to be solved.
  2. Identify your peak usage time(s) and collect IO and message profile data during those times.
  3. Design a solution based on the requirements and data collected.
  4. Use the Exchange Server Role Requirements Calculator to model the design based on the collected data and any extrapolations required for your design.
  5. Procure the necessary hardware based on the calculator output and your design choices, leveraging the advice of your hardware vendor.
  6. Configure the hardware according to your design.
  7. Before going into production, validate the storage system with Jetstress (following the recommendations in the Jetstress Field Guide) to verify that your storage configuration can meet the requirements defined in the calculator.
  8. Once the hardware has been validated, deploy a pilot that mirrors your expected production load.
  9. Be sure to collect performance data and analyze it. Verify that the data matches your theoretical projections. If the pilot requires additional hardware to meet the demands of the user base, optimize the design accordingly.
  10. Deploy the optimized design and start onboarding the remainder of your users.
  11. Continue collecting data and analyzing it, and adjust if changes occur.

The last step is important. Far too often, I see customers implement an architecture and then question why the system is overloaded. The landscape is constantly evolving. Years ago, bring your own device (BYOD) was not an option in many customer environments, whereas, now it is becoming the norm. As a result, your messaging environment is constantly changing – users are adapting to the larger mailbox quotas, the proliferation of devices, the capabilities within the devices, etc. These changes affect your design and can consume more resources. In order to account for this, you must baseline, monitor, and evaluate how the system is performing and make changes, if necessary.
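If you have no monitoring suite in place, one lightweight way to start that baseline is the built-in Get-Counter cmdlet. This is only a sketch; the counter list, sample window, and output path are illustrative choices, not official sizing guidance:

```powershell
# Illustrative baseline capture; run on each Mailbox server during peak usage.
# The counters and sample window here are examples, not a prescribed set.
$counters = @(
    '\Processor(_Total)\% Processor Time',
    '\Memory\Available MBytes',
    '\LogicalDisk(*)\Avg. Disk sec/Read',
    '\LogicalDisk(*)\Avg. Disk sec/Write'
)

# 12 samples, 5 seconds apart (~1 minute); widen this considerably for a real baseline.
Get-Counter -Counter $counters -SampleInterval 5 -MaxSamples 12 |
    Export-Counter -Path "C:\PerfBaseline\$(Get-Date -Format yyyyMMdd-HHmm).blg" -FileFormat BLG
```

The resulting .blg files can be opened in Performance Monitor and compared month over month as part of your review cycle.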

Historical Data

To run a successful service at any scale, you must be able to monitor the solution to not only identify issues as they occur in real-time, but to also proactively predict and trend how the user base or user base activity is growing. Performance, event log, and protocol logging data provide two valuable functions:

  1. It allows you to trend and determine how your users’ message profile evolves over time.
  2. When an issue occurs, it allows you to go back in time and see whether there were indicators that were missed.

The data collected can also be used to build intelligent reports that expose the overall health of the environment. These reports can then be shared at monthly service reviews that outline the health and metrics, actions taken within the last month, plans for the next month, issues occurring within the environment and steps being taken to resolve the issues.

If you do not have a monitoring solution capable of collecting and storing historical data, you can still collect the data you need.

  • Exchange 2013 captures performance data automatically and stores it in the Microsoft\Exchange Server\V15\Logging\Diagnostics\DailyPerformanceLogs folder. If you are not running Exchange 2013, you can use Experfwiz to capture the data.
  • Event logs capture all relevant events that Exchange writes natively. Unfortunately, I often see customers configure Event logs to flush after a short period of time (one day). Event logs should collect and retain information for one week at a minimum.
  • Exchange automatically writes a ton of useful information into protocol logs that can tell you how your users and their devices behave. Log Parser Studio 2.2 provides a means to interact with this data easily.
  • Message tracking data is stored on Hub Transport servers and/or Mailbox servers and provides a wealth of information on the message flow in an environment.
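As a small illustration of mining that data, message tracking logs can answer trend questions directly from the Exchange Management Shell. The seven-day window and per-day grouping below are illustrative choices, not a prescribed report:

```powershell
# Messages sent per day over the past week, from the local server's tracking logs.
# The time window and grouping are examples only; adapt to your own trending needs.
Get-MessageTrackingLog -EventId SEND -Start (Get-Date).AddDays(-7) -End (Get-Date) -ResultSize Unlimited |
    Group-Object { $_.Timestamp.Date } |
    Sort-Object Name |
    Select-Object Name, Count
```

Run regularly and exported to CSV, a query like this gives you exactly the kind of historical trend data discussed above.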

Summary

As I said at the beginning of this article, many of these customer issues could have been avoided by taking sensible, proactive steps. I hope this article inspires you to investigate how many of these might affect your environments, and more importantly, to take steps to resolve them, before you are my next critsit escalation.

Ross Smith IV
Principal Program Manager
Office 365 Customer Experience

Using an Azure VM as a DAG Witness Server


I’m happy to announce support for use of an Azure virtual machine as an Exchange 2013 Database Availability Group witness server. Automatic datacenter failover in Exchange 2013 requires three physical sites, but many of our customers with stretched DAGs only have two physical sites deployed today. By enabling the use of Azure as a third physical site, this provides many of our customers with a cost-effective method for improving the overall availability and resiliency of their Exchange deployment.

You can learn more about the deployment and configuration process, as well as learn about our best practices in the TechNet Library article.

It’s important to remember that deployment of production Exchange servers is still unsupported on Azure virtual machines, so it’s not yet possible to stretch a DAG into Azure. This announcement is limited to deployment of a file share witness in the Azure cloud. Also note that this is not related to the “Cloud Witness” feature in the Windows Server Technical Preview. Stay tuned for future announcements about additional support for Azure deployment scenarios.

Jeff Mealiffe
Principal PM Manager
Office 365 Customer Experience

OWA Forms Based Auth Logoff Changes in Exchange 2013 Cumulative Update 8 – And Good News for TMG Customers


Back at the release of Exchange Server 2013 CU1 we made some necessary changes to the way OWA logoff works. Those changes had the unfortunate side-effect of breaking the way TMG spotted a user’s attempt to logoff, thereby breaking that scenario.

Well, we have some more changes in mind for OWA logoff once again, and we’re taking the opportunity this time to FIX the TMG logoff issues at the same time. A better result all around we are sure you will agree.

And one more thing, this is a heads up, as we’re delivering this change in CU8.

So what are we changing? Well simply put, instead of sending you back to the logon form when you log out, we’re sending you to a new static logoff page, recommending you close your browser.

Why would we want to do that? Well, it means we have a more consistent logoff experience now whether the authentication used is FBA, Basic or Integrated Windows, the message gets presented for all. It also means we decouple log on and logoff, which means each can potentially be changed in some way without impacting the other.

So here’s the old, pre-CU8 way;

When using OWA, and when you click on sign out:

  1. The client initiates logoff with the request to “/owa/logoff.owa”
  2. Client then gets a 302 redirect to “/owa/auth/logon.aspx”

And you’re back at the logon page.

When using ECP, and the user clicks on sign out:

  1. The client initiates logoff with the request to “/ecp/logoff.aspx”
  2. Client gets a 302 redirect to “/owa/logoff.owa”
  3. The client then gets another 302 redirect to “/owa/logon.aspx”

And you’re back at the logon page again.

Now here’s how we’re doing it in CU8 by default.

When using OWA, and when you click on sign out:

  1. Client initiates logoff with the request to “/owa/logoff.owa”
  2. The server sends to client a 302 redirect to the landing page “/owa/auth/signout.aspx”

Now you’re at the new signout.aspx page.

When using ECP and the user clicks on sign out:

  1. Client initiates logoff with the request to “/ecp/logoff.aspx”
  2. Client gets a 302 redirect to “/owa/logoff.owa”
  3. Client gets a 302 redirect to the landing page “/owa/auth/signout.aspx”

And again you’re at the new signout.aspx page.

So now that you understand what we changed and why, why do you care? And why are we telling you now? We expect a large portion of our customers likely don’t need to care too much as the changes will be invisible to you, but some of you may need to (as our KB articles say) ‘consider the following’ scenarios;

You are using TMG and have it configured to watch for logoff.owa to signify a user was logging off. If you have that configuration today it will simply start to work again. That’s great news, isn’t it?

Regardless of TMG, it still might be important to you to know about this if you have any third-party applications integrated into Exchange. We know of a few that have come to depend upon the behavior we introduced with CU1, and we know at least one (as they are participants in our TAP, which makes them very smart fellows) who has already made the changes they needed to accommodate this in preparation for CU8 being publicly available.

So what if the third party vendor solution you have wasn’t aware of this change, and once you install CU8 things break? Well, there are two things you can do;

  1. Ask your vendor why, if they develop third-party add-in apps for Exchange, they are not reading our blog… and ask them when they will be fixing their app so it works with your CU8 or later deployment.
  2. You can put in a temporary reversion to the older (CU1 through CU7) behavior. This change is only supported with CU8 or later, and the ability to make this reversion will potentially be removed from future updates – so don’t get used to using it, and don’t forget CU9 or later will wipe any web.config changes you make.

The legacy logoff mode can be enabled (disabling redirect to signout.aspx) by changing 3 web.config files.

On servers with the Client Access role;

  • %ExchangeInstallPath%\FrontEnd\HttpProxy\OWA\web.config

On servers with the Mailbox Role;

  • %ExchangeInstallPath%\ClientAccess\OWA\web.config
  • %ExchangeInstallPath%\ClientAccess\ECP\web.config

Modify the following line (make sure you make a backup of web.config before you do this; the <!-- and trailing /--> ensure it is treated as a comment and not acted upon):

<!-- add key="LogonSettings.SignOutKind" value="LegacyLogOff" /-->

To look like this:

<add key="LogonSettings.SignOutKind" value="LegacyLogOff" />

AppPools will recycle automatically on the change unless automatic recycling has been disabled, in which case they will need to be recycled manually.

If the entry is not present, or the value is “DefaultSignOut” or any other value, then logoff ends on the signout.aspx page (the default).

And don’t forget, the next Cumulative Update will reset this manual modification, so be prepared to do it again if you must after CU9 releases. Ideally though, if the reason you are doing this is to allow some third party app to work, that app should be updated to live with the new behavior.
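If you need to make this edit across several servers, it can be scripted. The following is a rough sketch only: it assumes default install paths and that the shipped web.config carries the LegacyLogOff line in its commented-out form. Verify both before running, and remember the next CU will undo the change:

```powershell
# Sketch: enable legacy logoff by uncommenting the LegacyLogOff line in each
# web.config. Paths assume a default install; each file is backed up first.
$files = @(
    "$env:ExchangeInstallPath\FrontEnd\HttpProxy\OWA\web.config",
    "$env:ExchangeInstallPath\ClientAccess\OWA\web.config",
    "$env:ExchangeInstallPath\ClientAccess\ECP\web.config"
)

$commented = '<!-- add key="LogonSettings.SignOutKind" value="LegacyLogOff" /-->'
$active    = '<add key="LogonSettings.SignOutKind" value="LegacyLogOff" />'

foreach ($file in $files) {
    Copy-Item $file "$file.bak"   # keep a backup, as recommended above
    (Get-Content $file) -replace [regex]::Escape($commented), $active |
        Set-Content $file
}
```

Only the web.config files relevant to the roles installed on a given server will exist, so adjust the list per server.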

The final, and perhaps the most important scenario is that this change introduces an install order dependency, something we thankfully have quite rarely, but something you need to pay attention to on this occasion.

Simply put, if a user’s mailbox is on a CU8 Mailbox server but connecting through a CU6 or earlier CAS, they will see an issue at OWA logoff due to this change. What about if you have CU7 CAS you ask? Well we made enough code changes in CU7 that this situation doesn’t crop up. So to put it simply once again, if your CAS is on CU7, or CU8, or if you have only multi-role servers at CU7 or later, no problem, none, not at all.

So, what’s the best way to make sure you won’t hit an issue? Keep up to date of course, as that means all your servers are already at CU7, so you have nothing to worry about.

What if on the other hand you are coming from CU6 or earlier and you expect users might be using OWA during the window in which you plan to install CU8?

Well if you have separate CAS/MBX roles (and if you do… why?) then we recommend you update all the CAS first. Then you can update the Mailbox servers in any order you like. That’s the simplest solution by far.

If, on the other, other hand, you have all multi-role servers (well done you, we like you), and they are CU6 or earlier, then you have three choices;

  • Upgrade them all to CU7 before you begin your CU8 rollout.
  • Accept there may be some funky issues during that upgrade window and simply decide you want to live with it.
  • Do a rolling upgrade and be smart with your load balancing pool so all incoming connections hit only upgraded CU8 CAS. If you don’t have any idea what that means or how to do it, it’s not the option for you. Take the first option.

We hope this gives you what you need to successfully get your servers to CU8, and we hope you TMG stalwarts are pleased the logoff experience is once again working properly.

Greg Taylor
Principal PM Manager
Office 365 Customer Experience

A better way to recover a mailbox


The process of recovering deleted users or mailboxes in a hybrid or cloud-only organization can be frustrating. When dealing with these scenarios, customers would sometimes end up with multiple mailboxes for a single user, find that some emails are missing, or even lose data associated with other services. Often, they would find those situations difficult to troubleshoot and they would call Microsoft support for help.

For a long time now, Exchange Online has had a capability called "soft delete" that allows a user to recover a mailbox with very little effort. Let’s take a look at how a mailbox recovery should be approached.

Scenario: User Is Accidentally Deleted Along with Their Mailbox

First, you need to know if the deleted user was managed on-premises or in the cloud.

If the user was managed in the cloud:

If the source of authority for the user is in the cloud (meaning they are not sync’d from on-premises Active Directory), you can restore the user from the Admin Portal at http://portal.office.com. Navigate to Users, and select Deleted Users. There you will see the option to restore the user.


If user was synchronized from on-premises AD:

If the user account was being synchronized from on-premises you should restore the user on-premises. The mailbox will automatically reconnect.

IMPORTANT NOTE: Recreating the user on-premises will not have the same effect because the Globally Unique Identifier (GUID) used in the recovery process would be different.

The proper way to restore a deleted user is documented at http://support.microsoft.com/kb/2619308. That’s it! There is no need to take any additional actions.

What If These Actions Do Not Work?

There could still be times when "soft recovery" actions will not fix the user's account. For instance, the user may have a corrupt account or the account may have been permanently deleted. Another possibility is that the user is no longer with the company, but the mailbox is used as a job-related mailbox and needs to be available to a new user. 

For these scenarios we have the New-MailboxRestoreRequest cmdlet. Unlike the recovery process above (which is the best approach when it is available), New-MailboxRestoreRequest allows you to merge the data from a soft-deleted user or archive mailbox into an alternate active mailbox or archive mailbox.

Why Is This a Benefit?

Previously, if you could not recover both the user and the mailbox, you would have to perform an unsupported process of hard-deleting a mailbox. This process was unreliable and sometimes caused a ripple effect on other services such as SharePoint and Lync. If the process failed, you were left with very limited options, and ultimately had to call support.

IMPORTANT NOTE: We are in the process of disabling the old method of recovering a mailbox which involved using Get-RemovedMailbox and New-Mailbox –RemovedMailbox. You will soon find that these methods will no longer be available, which is a good reason to get familiar with new options.

What Do I Need To Do To Take Advantage of This New Option?

All you need to do is create a new user with a mailbox and merge the data. The way you create the user with a new mailbox will depend on whether you use DirSync or the Microsoft Online Portal to create users.

1. Create the user and Mailbox.

Using DirSync:

  • Create the user and remote mailbox from the on-premises Exchange management tools.
  • Force a directory synchronization.

Not Using DirSync:

  • Create the user and mailbox directly in the Microsoft Online Portal.

2. Run the cmdlet to merge the accounts. This is done from PowerShell connected to Exchange Online.

A) Connect PowerShell to Exchange Online. To do this, see http://technet.microsoft.com/en-us/library/jj984289(v=exchg.150).aspx

B) Run the following command and retrieve the GUID for the soft-deleted mailbox that you want to restore: Get-Mailbox -SoftDeletedMailbox

C) Run a cmdlet similar to the following to restore the mailbox: New-MailboxRestoreRequest -SourceMailbox <GUID from Step 2B> -TargetMailbox <GUID from Step 1>

NOTE 1:  If the mailbox source and/or target is an archive, use the following switches (-SourceIsArchive and/or -TargetIsArchive)

NOTE 2: The values in Step 2C call for the account GUIDs, but they can also take other values such as an SMTP address or a UPN. The reason we recommend using GUIDs is to reduce the chances that there will be any confusion or conflict between the source and destination.
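Putting steps 2B and 2C together, a sketch of the full sequence might look like the following. The display name is a placeholder for your own user, and the final statistics check is optional:

```powershell
# Sketch: restore a soft-deleted mailbox into its replacement, then watch progress.
# 'Jane Doe' is a placeholder; identify your source and target however suits you.
$source = Get-Mailbox -SoftDeletedMailbox | Where-Object { $_.DisplayName -eq 'Jane Doe' }
$target = Get-Mailbox -Identity 'Jane Doe'

New-MailboxRestoreRequest -SourceMailbox $source.ExchangeGuid.ToString() `
    -TargetMailbox $target.ExchangeGuid.ToString()

# Restore requests run asynchronously; check on them like this:
Get-MailboxRestoreRequest | Get-MailboxRestoreRequestStatistics |
    Select-Object TargetAlias, Status, PercentComplete
```

Add -SourceIsArchive and/or -TargetIsArchive to the New-MailboxRestoreRequest call when either side is an archive, as noted above.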

Are there limitations?

This merge capability does have some limitations. For instance, you cannot merge data from a source mailbox that is active. Let’s say you have a user (Jane) who is still licensed and using her mail. You would be unable to merge her data into Tom’s mailbox with this new approach. This new process is not meant to be used for backup and duplication purposes; this is a recovery tool only.

Another time when this tool will not work is when the mailbox is hard-deleted. If you manually remove a user account in Office 365, and then remove the user from the Recycle Bin, the mailbox would be hard-deleted. This is the potentially damaging scenario that was briefly discussed above. Again, this merge approach is for recovering soft-deleted mailboxes when the normal recovery options are not available to you.

Timothy Heeney

Ask The Perf Guy: New Exchange 2013 Performance Guidance Available On TechNet


I’m happy to announce that the Exchange performance virtual team within Microsoft Support has produced some fantastic new performance guidance content for Exchange 2013 users, and it is now available on TechNet at http://aka.ms/Ex2013PerfContent. This is a compilation of resources that we’ve previously published, guidance presented at Microsoft events since the release of Exchange 2013, and things that we have learned through the process of providing support to customers deploying the product.

Clearly this is just a starting point, and we will be updating this content on an ongoing basis as we continue to learn more about how Exchange works in your deployments. Let us know what you think!

Jeff Mealiffe
Principal PM Manager
Office 365 Customer Experience


Get the inside scoop on Exchange at Microsoft Ignite


This morning we published the first look at the Ignite session catalog providing you a better view of what to expect at Ignite. Ignite brings together the Exchange Conference with TechEd and other Microsoft technology events. By attending Ignite, you’ll have access to a broad set of content on Microsoft’s technologies, including detailed content from the teams that build the products and the experts who deploy and use the technologies.

The initial Ignite session catalog includes over 50 sessions on Exchange and Outlook. We will continue adding more sessions leading up to the event. Use this pre-filtered search link to begin reviewing the content we are planning for Ignite. In addition to first look at the sessions, we are sharing an expanded view of the featured speakers you can expect to find at Ignite.

Veterans of past MEC events, Greg Taylor and Jeff Mealiffe, took some time to talk to us about what to expect at Ignite. Check this out as they coin the new Ignite descriptor - #BREEP!

Mark your calendars for an #IgniteJam on Twitter

Join us on February 3rd at 9:00 am PT, when we’ll have the whole event team and a few speakers ready to chat with you on Twitter. We’ll be ready to answer your questions about the event and hear what you’re excited about in terms of community experiences and things to do in Chicago. Add the event to your calendar with this link.

To participate in the #IgniteJam

  1. Log in to Twitter on February 3rd at 9:00 a.m. PT. For easier real-time participation, use Twubs and join us at: twubs.com/ignitejam.
  2. Introduce yourself and include the hashtag #ignitejam and tag us at @MS_Ignite.
  3. Watch for questions coming from @MS_Ignite and chime in with your answers and commentary, using the hashtag #ignitejam.


So sign-up now and we’ll see you in Chicago!

Single-Click Mailbox Conversion


Here’s a scenario that might be familiar to you: in support-focused organizations, the email account for external communications with customers is often managed by a single employee. When that employee transitions out of that role, mailbox management responsibility can be shared by multiple employees until a replacement is in place. To provide access to the mailbox for multiple employees, admins typically convert the mailbox to a shared mailbox. Previously, multiple PowerShell commands were required to convert and reconfigure a mailbox. As a result, admins have been asking for a simpler way to convert mailboxes.

Single-Click Conversions

Admins can now convert a cloud-based user mailbox to a shared mailbox with a single click in the Exchange Admin Center (EAC). No need to use PowerShell. Similarly, a cloud-based shared mailbox can be converted to a user mailbox with a single click, as well. This feature applies to cloud-based mailboxes only. There are no plans for on-premises support at this time.

Currently, we support converting only between user mailboxes and shared mailboxes. Mailboxes placed on hold and mailboxes with personal archives are also supported. There are no plans to support other types of mailboxes at this time.

Convert a User Mailbox to Shared Mailbox

As you can see from Figure 1 below, you simply navigate to the list of user mailboxes in EAC. Select the mailbox you want to change and click Convert. You will be notified when the conversion process has completed.


Figure 1 Converting user mailbox to shared mailbox

Convert a Shared Mailbox to User Mailbox

Similarly, as shown in Figure 2 below, you can navigate to shared mailboxes in EAC, select the mailbox you want to convert and click Convert. You will be notified when the conversion process has completed.


Figure 2 Converting shared mailbox to user mailbox

Be sure to add a license for the converted user mailbox and assign it a temporary password before using it. A license is required if a shared mailbox exceeds its quota. This applies to any shared mailboxes that have been converted from user mailboxes. To manage mailbox licenses, see Assign or unassign licenses for Office 365 business.

Paul Lo

Configuring Multiple OWA/ECP Virtual Directories on the Exchange 2013 Client Access Server Role


We have previously published guidance for setting up multiple OWA and ECP virtual directories for Exchange Server 2007 and 2010, and now it is the turn of Exchange Server 2013.

The eagle eyed amongst you may spot some copy and paste from previous blogs on the subject, and well frankly, you’d be correct. The reasons for doing this haven’t changed, only the method by which you do it, so I’m re-using some of the text to avoid wasting electrons.

In short: Microsoft supports using multiple Outlook Web App (OWA) and Exchange Control Panel/Admin Center (ECP) front end virtual directories on a server with the Exchange 2013 Client Access Server role, when each is in its own website and is named ‘OWA’ and ‘ECP’.  Each virtual directory must be listening on the standard TCP 443 port for the site.

Note: You must ensure that the Default Web Site is set to All Unassigned for IP, or problems will occur with PowerShell.

There are usually three reasons for choosing this type of configuration. Each of these has slightly different considerations. Here’s what we said for Exchange 2010:

  • Scenario 1: You have one Active Directory site facing the Internet, and are using a reverse proxy (such as Microsoft Forefront Threat Management Gateway or Unified Access Gateway) in front of Exchange.
    You are delegating credentials from that firewall to Exchange, meaning you have to use Basic or Integrated Windows Authentication (IWA) on Client Access Server (CAS) and not Forms-based Authentication (FBA). Your requirement is to provide FBA for all users, internal and external.
  • Scenario 2: You have a non-Internet facing Active Directory site and your requirement is to provide FBA for all users, internal and external. In this configuration, in order to provide external users access to OWA or ECP, a CAS in the Internet facing site must proxy requests to the CAS in the non-Internet facing site – this requires that the CAS in the non-Internet facing site have IWA enabled, thereby disabling FBA.
  • Scenario 3: You have different users within one organization who require a different OWA experience, such as a different Public/Private File Access or other policy or segmentation features. (This might be a good place to remind you that customizing and branding OWA isn’t something we support in Exchange 2013, so this is NOT a reason you want to consider this type of configuration in case you were wondering)

Now things are actually a bit different with Exchange 2013. I’m calling this out in case you didn’t actually know. You can achieve scenarios 1 and 2 out of the box with no additional configuration, specifically, no need for an additional web site. Yes, really.

Exchange 2013 ships with Integrated Windows authentication enabled on the OWA and ECP virtual directories as well as Forms-based authentication. So, if you choose NTLM delegation, or KCD, from TMG/UAG to Exchange, it just works. And because OWA is smart enough to determine that the client connecting to it is a browser and not another Exchange server, the second scenario works out of the box too. Clients get FBA, but proxying still works. Genius.

With Exchange 2013 there’s one new reason to add to the list: separation of the client-facing ECP settings pages and the Exchange Administration Console (EAC) settings pages. Both of these are served by the ECP virtual directory, which is somewhat confusing I’ll admit. Basically the code behind the ECP virtual directory serves up either the personal ECP pages or the administrator’s EAC pages based on the credentials of the user logging in. Of course this means if you allow access to /ECP from the Internet (which you need to for OWA or Outlook users to go to ECP) you also allow someone with administrative credentials to log into EAC. Some customers don’t like this.

So to summarize, here are the only reasons for which you might feel the need to create multiple OWA and ECP virtual directories:

  1. Separating admin/user ECP access.
  2. Scenario 3 as described earlier, because you have different policies, settings, or authentication requirements.

Now that’s clear, here are some more statements, warnings and caveats.

Microsoft supports creating additional OWA/ECP virtual directories in a new IIS web site with a new IP address, and using those only for client access purposes. By default those new virtual directories will be FBA enabled, and will have no internal or external URL values.

You will also need to ensure that whatever name the users will be using to connect to the new FBA enabled OWA/ECP site is present on the installed certificate and that DNS for that name resolves to the correct IP address.
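As a quick sanity check, name resolution and certificate coverage can be verified from PowerShell on the CAS. This is a hedged sketch: `mail2.contoso.com` is a placeholder for your secondary site’s FQDN, not a name from this article.

```powershell
# Sketch only: verify the secondary site's name resolves to the new IP.
# 'mail2.contoso.com' is an illustrative placeholder.
Resolve-DnsName -Name mail2.contoso.com -Type A

# List installed Exchange certificates and the names they cover;
# confirm one of them includes the secondary site's FQDN.
Get-ExchangeCertificate | Format-List Thumbprint, CertificateDomains, Services
```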

Additional considerations:

  • To avoid issues with DNS registration, the following hotfix is recommended if Exchange is installed on Windows Server 2008 R2:
    http://support.microsoft.com/default.aspx?scid=kb;en-US;2386184
  • If one site uses too many resources and it is throttled, the operations in all web sites in this application pool will be throttled.
  • If you ever decide to recycle the Application Pool, all web sites hosted in this Application Pool will cease to work temporarily.

Now that you understand the scenarios properly, and understand the constraints and potential issues, all that’s left is the actual steps you need to go through.

Ok, just remembered, there’s one more warning. Only the following set of steps is supported. If you decide to miss a few steps out, change a few to suit yourself, or otherwise generally ignore them and go your own way, you will not be supported. And, just as likely and more importantly, you will break something and your users will be angry, and so will your boss. So just follow the steps, and don’t cross the streams.

Here are the steps, at last.

This process assumes you are setting the default web site to use Integrated Windows auth only, and the new Virtual Directory will be configured for FBA, because that’s supported. You can leave default web site configured for FBA too, by not doing anything to it, but I’m documenting the steps for turning that off, just in case that’s your choice.

  1. Add a secondary IP address to the server – this could be with another NIC, or done just by adding an IP to an existing NIC.
  2. If you added a NIC, in the network properties, uncheck 'register this connection in DNS' in IPv4 for the NIC (this also prevents IPv6 from registering too as it happens).
  3. Create the additional website in IIS in a new root folder (C:\inetpub\OWA_SECONDARY) and bind it to the new IP. Enable for SSL, choose whatever certificate you want to use for this site.


  4. Give the local IIS_IUSRS group Read and Execute permission to the C:\inetpub\OWA_SECONDARY folder.
  5. Copy the Default Web Site root folder contents in its entirety including any subfolders to the new site root folder (i.e. copy %SystemDrive%\inetpub\wwwroot\ contents to C:\inetpub\OWA_SECONDARY).
  6. Create new OWA and ECP subfolders in your new web site’s root folder (C:\inetpub\OWA_SECONDARY\OWA, C:\inetpub\OWA_SECONDARY\ECP).
  7. Copy the entire contents of the Default Web Site OWA and ECP folders, including any subfolders, to the new subfolders for the new web site. (Copied from /…/FrontEnd/HttpProxy).
  8. Run the following (substituting <Server> for the server hosting the CAS role);
    1. New-OwaVirtualDirectory -Server <Server> -Role ClientAccess -WebSiteName OWA_SECONDARY -Path "C:\inetpub\OWA_SECONDARY\OWA"
    2. New-EcpVirtualDirectory -Server <Server> -Role ClientAccess -WebSiteName OWA_SECONDARY -Path "C:\inetpub\OWA_SECONDARY\ECP"


  9. Run the following to set the default site to IWA only (this is optional, but provided in case you want to do this);
    1. Set-OwaVirtualDirectory -Identity "<server>\owa (Default Web Site)" -FormsAuthentication $false -WindowsAuthentication $true
    2. Set-EcpVirtualDirectory -Identity "<server>\ecp (Default Web Site)" -FormsAuthentication $false -WindowsAuthentication $true
  10. Perform an IISReset.
  11. Test! Really, make sure you do.
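Steps 3 through 7 above can be sketched in PowerShell using the WebAdministration module. This is a sketch under assumptions: the paths and site name come from this article’s example, the IP address `10.0.0.2` is a placeholder, and binding your chosen certificate to the new site is left as a manual follow-up step.

```powershell
# Sketch of steps 3-7: create the folders, copy the content, and create
# the new IIS web site bound to the secondary IP. Run on the CAS itself.
Import-Module WebAdministration

$root = 'C:\inetpub\OWA_SECONDARY'
New-Item -ItemType Directory -Path $root, "$root\OWA", "$root\ECP" -Force

# Step 5: copy the Default Web Site root content
Copy-Item "$env:SystemDrive\inetpub\wwwroot\*" $root -Recurse -Force

# Step 7: copy the FrontEnd proxy OWA and ECP content
# (assumes the default Exchange 2013 install path via ExchangeInstallPath)
$fe = Join-Path $env:ExchangeInstallPath 'FrontEnd\HttpProxy'
Copy-Item "$fe\owa\*" "$root\OWA" -Recurse -Force
Copy-Item "$fe\ecp\*" "$root\ECP" -Recurse -Force

# Step 3: create the site bound to the secondary IP on port 443.
# 10.0.0.2 is a placeholder - use your secondary IP.
New-Website -Name 'OWA_SECONDARY' -PhysicalPath $root `
    -IPAddress 10.0.0.2 -Port 443 -Ssl
# Bind your chosen certificate to the new site afterwards (e.g. in IIS Manager).
```

Then continue with step 8 to create the virtual directories.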

The final thing to understand is what you need to do when you apply a Cumulative Update (CU) to any server you have made these changes to. The CU install will NOT properly update the files in the secondary OWA or ECP web site for you, nor will the secondary site work correctly afterwards. It’s not just a resource folder/file version issue, and just updating the files in the directory is not going to do it; there’s more to it.

The only supported solution here is to delete the secondary Vdirs and Web Site and re-do all the steps. So, make sure you have noted any non-default settings you had on the site, then delete the Vdirs, delete the web site (don’t forget to do this), delete any content in the folders, and start again at step 3 in the list above. Re-create the web site, re-create the Vdirs, copy the latest files and re-apply any custom configuration or settings you previously applied. Don’t skip any steps or take any shortcuts. Script it (you can even script the deletion/creation of a web site), run that script after you install the CU, and ensure you do this after each and every CU.
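As a starting point for such a script, the teardown half might look like the following sketch. It assumes the names used in this article’s example, and after running it you would re-run steps 3 onward to rebuild the site.

```powershell
# Sketch: remove the secondary virtual directories and web site after a CU,
# so the steps above can be re-run cleanly. Names match this article's example.
Remove-OwaVirtualDirectory -Identity "$env:COMPUTERNAME\owa (OWA_SECONDARY)" -Confirm:$false
Remove-EcpVirtualDirectory -Identity "$env:COMPUTERNAME\ecp (OWA_SECONDARY)" -Confirm:$false

Import-Module WebAdministration
Remove-Website -Name 'OWA_SECONDARY'

# Clear out the old content before re-copying the updated files
Remove-Item 'C:\inetpub\OWA_SECONDARY\*' -Recurse -Force
```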

Once you have done that you should be good to go.

We hope this helps you understand the configuration a bit better now should you choose to go down this route and please post back if you have any questions or comments.

Greg Taylor
Principal PM Manager
Office 365 Customer Experience

Considering updating your Domain functional level from Windows 2003? Read this!


Now that Windows Server 2003 end of life (July 14th, 2015) is on the horizon, many customers are updating their Active Directory (AD) Domain Controllers (DCs) from 2003. The first item to consider is which Windows Server Operating System (OS) you will be moving to for your DCs. There are several options to consider today: the 2008, 2008 R2, 2012, or 2012 R2 operating systems. However, no matter which newer OS you move your DCs to, coming from 2003, the krbtgt account will reset its password when you raise the Domain Functional Level (DFL), and that is the change that could break Exchange.

Note: If your DFL is already set to 2008 or higher, then you do not need to worry about this article.

It is a good idea to know that during the process of raising the Domain Functional Level (DFL) of your Active Directory structure from 2003, the krbtgt account password gets changed. This password change replicates separately within AD and occurs after the DFL has been raised. It should have no impact on any applications that depend on Active Directory, but it does sometimes cause applications to stop authenticating, one of them being Exchange.

Since raising functional levels is an irreversible operation (in many situations, though no longer always), it should be planned with care and only after having verified that it will not impact any applications that rely heavily on Directory Services. In-house written and third-party products are the main concern. A lab environment for testing those applications is the best option, which is why we recommend a lab for Exchange.

After you’ve made the DFL change from 2003, you’ll want to watch for any Event ID 14 or Event ID 10 errors in the System event log on the DCs.

If you do see these events appear after the DFL has been raised, then there are a couple of options available to resolve the authentication error.
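If you prefer to check for those events from PowerShell rather than the Event Viewer UI, a minimal sketch (run on each DC, or wrapped in Invoke-Command for remoting) is:

```powershell
# Sketch: look for recent Event ID 10 or 14 entries in the System log
# from Kerberos-related providers. Run on each DC.
Get-WinEvent -FilterHashtable @{ LogName = 'System'; Id = 10, 14 } -MaxEvents 50 |
    Where-Object { $_.ProviderName -like '*Kerberos*' } |
    Format-Table TimeCreated, Id, ProviderName -AutoSize
```

Note that Get-WinEvent reports an error if no matching events exist, which in this case is the outcome you want.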

Option 1: Restart the Kerberos Key Distribution Center service on all DC’s (short impact, service restart)

  • Command line option steps:
    • sc \\ComputerName stop kdc
    • sc \\ComputerName start kdc
  • Active Directory PowerShell module steps:
    • $DCs = Get-ADDomainController -Filter *
    • $DCs | ForEach-Object { Get-Service -Name kdc -ComputerName $_.HostName | Restart-Service }
  • GUI steps:
    • Open the Services mmc (services.msc) on the DC’s
    • Select the Kerberos Key Distribution Center service and click the restart button


Option 2: Restart all DC’s in the Forest (greatest impact, restarting of servers could take time)

  • Manually log into each DC and restart them all, OR
  • Within the Active Directory PowerShell module:
    • $DCs = Get-ADDomainController -Filter *
    • $DCs | ForEach-Object { Restart-Computer -ComputerName $_.HostName }

Why is this important?

While there should be zero impact to applications, the krbtgt account password does get reset, and the speed at which that account’s new password reaches every DC via the normal AD replication process can affect Exchange (or other applications). We recommend that you know how to perform the resolution steps given here and that you make this change through a change control process.

Why does this happen only during the 2003 DFL change?

The underlying issue is due to the addition of the AES hashes (128 and 256). These hashes are only added during the one DFL change from 2003 to any higher (’08, ’08 R2, ’12, ’12 R2) domain functional level. The potential to introduce other newer/updated encryption types in future OS versions does exist, and we could once again run into this issue.

Knowing is half the battle. It is very unlikely that you’ll run into this issue, but now that you know how to solve it and can be prepared IF you do run into it, you have easy and quick solutions at the ready. Plan your change, move up to the newer OS version and AD functional levels, enjoy the new features that are available today, and don’t let this change break Exchange.

Mike O'Neill

Exchange 2013 and Exchange 2010 Coexistence with Kerberos Authentication


In April 2011, I documented our recommendation around utilizing Kerberos authentication for MAPI clients to address scalability limits with NTLM authentication. The solution leverages deploying an Alternate Service Account (ASA) credential so that domain-joined and domain-connected Outlook clients, as well as other MAPI clients, can leverage Kerberos authentication.

Recently, we published guidance on how to enable Kerberos authentication for Exchange 2013 MAPI clients. While this guidance explains the necessary steps to deploy the ASA credential to Exchange 2013, it does not describe the steps you must take to coexist with an Exchange 2010 environment. There are certain steps you must take in order to deploy Kerberos authentication for Exchange 2013 while coexisting with Exchange 2010.

As with all configuration changes, we recommend you thoroughly test this in a lab environment that mirrors your production environment.

Step 1 – Deploy Outlook Updates

In order to ensure an Exchange 2013 mailbox utilizing Kerberos authentication can connect via the Outlook client to legacy Public Folders and shared mailboxes hosted on Exchange 2010, the Outlook client must be running the following minimum versions:

Until you install these Outlook updates, you must not attempt to enable Kerberos authentication within your messaging environment while Exchange 2013 is coexisting with Exchange 2010; otherwise your users will see continuous authentication dialog prompts.

Step 2 – Create a New Alternate Service Account Credential

The RollAlternateServiceAccountPassword.ps1 script cannot deserialize objects and pass them between servers that are running different versions. This means the script cannot be used to copy the credentials from an Exchange 2010 server or push the credentials to an Exchange 2010 server. As a result, Exchange 2013 and Exchange 2010 cannot share the same Alternate Service Account (ASA) credential.

The Exchange 2013 ASA has the same requirements that were established with Exchange 2010. Specifically, all computers within the Client Access server array must share the same service account. In addition, any Client Access servers that participate in an unbound namespace or may be activated as part of a datacenter switchover must also share the same service account. In general, it’s sufficient to have a single account per forest, but knowing that 2010 and 2013 can’t share the same ASA, this should lead you to conclude you need one per version, per forest.

You can create a computer account or a user account for the alternate service account. Because a computer account doesn’t allow interactive logon, it may have simpler security policies than a user account and is therefore the preferred solution for the ASA credential. If you create a computer account, the password doesn't actually expire, but we still recommend that you update the password periodically. The local group policy can specify a maximum account age for computer accounts and in some customer environments scripts are utilized on a scheduled basis to periodically delete computer accounts that don’t meet current policies. To ensure that your computer accounts aren't deleted for not meeting local policy, update the password for computer accounts periodically. Your local security policy will determine when the password must be changed.
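Creating the computer account itself can be sketched with the Active Directory module. This is a sketch only: the account name `EXCH2013ASA` is an example, not a required name, and your organization’s naming conventions and OU placement should apply.

```powershell
# Sketch: create a computer account to serve as the Exchange 2013 ASA.
# 'EXCH2013ASA' is an example name; pick one that fits your conventions.
Import-Module ActiveDirectory
New-ADComputer -Name 'EXCH2013ASA' `
    -Description 'Alternate Service Account for Exchange 2013 Kerberos' `
    -Enabled $true
```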

Step 3 – Remove HTTP Service Principal Names from Exchange 2010 ASA

At this point you will be required to schedule an outage for your user population. If you do not, internal users may experience authentication dialogs while attempting to connect to HTTP resources (e.g., Autodiscover, OAB downloads) within Outlook.

If you followed our guidance for Exchange 2010, you have at least the following Service Principal Name (SPN) records associated with the Exchange 2010 ASA:

  • http/mail.corp.contoso.com
  • http/autod.corp.contoso.com
  • exchangeMDB/outlook.corp.contoso.com
  • exchangeRFR/outlook.corp.contoso.com
  • exchangeAB/outlook.corp.contoso.com

The Exchange 2010 ASA will continue to retain the exchangeMDB, exchangeRFR, and exchangeAB SPN records, but will lose the HTTP records as they will move to the Exchange 2013 ASA.

Use the following steps to remove the HTTP SPNs:

  1. Obtain the HTTP SPNs you need to remove from the Exchange 2010 ASA:

    setspn -L <domain\E2010ASA$>

  2. For each HTTP record that needs to be removed, execute the following:

    setspn -D http/<record> <domain\E2010ASA$>

Step 4 – Deploy ASA to Exchange 2013 Client Access Servers

To enable deployment of the ASA credential, the RollAlternateServiceAccountPassword.ps1 script has been updated to support Exchange 2013. You need to run the version of the script that ships with Exchange 2013 CU7 or later; it is located in the Scripts directory of the Exchange installation.

For more information on how to use the script, please see the section “Configure and then verify configuration of the ASA credential on each Client Access server” in the article Configuring Kerberos authentication for load-balanced Client Access servers.

Step 5 – Assign the Service Principal Names to the Exchange 2013 ASA

Now that the ASA has been deployed to the Exchange 2013 Client Access servers, you can assign the SPNs using the following command:

setspn -S http/<record> <domain\E2013ASA$>
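You can then confirm the SPNs landed on the new account with a quick listing (using the same placeholder account name as above):

```powershell
# Sketch: list all SPNs now registered on the Exchange 2013 ASA account
# and confirm the http/ records appear.
setspn -L <domain\E2013ASA$>
```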

Step 6 – Enable Kerberos Authentication for Outlook clients

By default, Kerberos authentication is not enabled for internal clients in Exchange 2013.

To enable Kerberos authentication for Outlook Anywhere clients, run the following command against each Exchange 2013 Client Access server:

Get-OutlookAnywhere -server <server> | Set-OutlookAnywhere -InternalClientAuthenticationMethod Negotiate

To enable Kerberos authentication for MAPI over HTTP clients, run the following against each Exchange 2013 Client Access server:

Get-MapiVirtualDirectory -Server <server> | Set-MapiVirtualDirectory -IISAuthenticationMethods Ntlm, Negotiate

Once you have confirmed the changes have replicated across Active Directory and verified that Outlook clients are connecting using Kerberos authentication (which you can determine via the HTTPProxy logs on the server and klist on the client), the scheduled outage is effectively over.
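On a client, one way to spot-check Kerberos is with klist. This is a sketch; the SPN shown uses this article’s example namespace, so substitute your own.

```powershell
# Sketch: request and display a service ticket for the Exchange HTTP SPN.
# A returned ticket indicates Kerberos is working for that name.
klist get http/mail.corp.contoso.com

# List cached tickets; look for entries matching the Exchange SPNs.
klist tickets
```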

Summary

Exchange 2010 and Exchange 2013 coexistence requires each version to have a unique ASA credential in order to support Kerberos authentication with MAPI clients.  In addition, Outlook client updates are required to support all coexistence scenarios.

For more information, including information on how to plan what SPNs you should deploy with your ASA credential, see Configuring Kerberos Authentication for Load-Balanced Client Access Servers.

Ross Smith IV
Principal Program Manager
Office 365 Customer Experience
