
Announcing the availability of modern public folder migration to Exchange Online


We are happy to announce the availability of public folder migration from Exchange Server 2013/2016 on-premises to Exchange Online! Many of our customers asked us for this, and the full documentation is now here. To ensure that any version-specific instructions are addressed appropriately, we have two articles to point you to:

While all of the information is located in the documentation, the key requirements are:

  • Exchange Server 2013 CU15 (or later), Exchange Server 2016 CU4 (or later)
  • Exchange on-premises hybrid configured with Exchange Online

If you have any additional questions, let us know in comments below. Enjoy!

Public Folder Migration Team


Released: March 2017 Quarterly Exchange Updates


With this month’s quarterly release we bid a fond farewell to Exchange Server 2007. Support for Exchange Server 2007 expires on 4/11/2017. Update Rollup 23 for Service Pack 3 will be the last update rollup released for the Exchange Server 2007 product. Today we are also releasing the latest set of Cumulative Updates for Exchange Server 2016 and Exchange Server 2013. These releases include fixes to customer reported issues and updated functionality. Exchange Server 2016 Cumulative Update 5 and Exchange Server 2013 Cumulative Update 16 are available on the Microsoft Download Center. Update Rollup 17 for Exchange Server 2010 Service Pack 3 is also now available.

Exchange Server 2013 and 2016 require .NET Framework 4.6.2

As previously announced, Exchange Server 2013 and Exchange Server 2016 now require .NET Framework 4.6.2 on all supported operating systems. Customers who are still running .NET Framework 4.5.2 should first deploy Cumulative Update 15 (Exchange 2013) or Cumulative Update 4 (Exchange 2016), upgrade the server to .NET Framework 4.6.2, and then deploy Cumulative Update 16 or Cumulative Update 5, respectively.
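
If you are not sure which .NET Framework release a server currently has, a quick check of the registry is one way to tell. This is an illustrative sketch only; the Release values shown (394802/394806 correspond to 4.6.2) should be confirmed against Microsoft's published .NET Framework version table:

# Check the installed .NET Framework 4.x release on this server
$release = (Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full' -Name Release).Release
if ($release -ge 394802) { "Release $release - .NET Framework 4.6.2 or later" }
else { "Release $release - older than 4.6.2, upgrade before applying the new CU" }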

Arbitration Mailbox Migration

Recently there have been reports of problems with customers migrating mailboxes to Exchange Server 2016. We wanted to take this opportunity to remind everyone that when multiple versions of Exchange co-exist within the organization, we require that all Arbitration Mailboxes be moved to a database mounted on a server running the latest version of Exchange. For more information, please consult the Exchange Server Deployment Assistant on TechNet.

Update on S/MIME Control

One year ago, we released an updated S/MIME Control for OWA. We have received questions from customers requesting clarification on what this release included. As stated previously, the control itself did not change. This was a packaging change necessary to prevent IE from throwing a certificate warning during installation due to SHA-1 deprecation. The Authenticode signature used to code sign the control itself still uses SHA-1, which ensures code-signing compatibility with Windows Vista/Windows Server 2008 and Windows 7/Windows Server 2008 R2. The Authenticode file hash and the delivery package are signed with a SHA-2 certificate. Signing the package with a SHA-2 certificate prevents IE from throwing a certificate warning when the package is installed and provides the necessary protection for the entire package.

Latest time zone updates

All of the packages released today include support for time zone updates published by Microsoft through March 2017.

TLS 1.2 Exchange Support Update coming in Cumulative Update 6

We would like to raise awareness of changes planned for the next quarterly update release. We are working to provide updated guidance and capabilities related to Exchange Server’s use of TLS protocols. The June 2017 release will include improved support for TLS in general and TLS 1.2 specifically. These changes will apply to Exchange Server 2016 Cumulative Update 6 and Exchange Server 2013 Cumulative Update 17.

Late Breaking Issues not resolved in Cumulative Update 5

Cumulative Update 5 ships with a couple of known issues that could not be resolved prior to the product release. The unresolved items we are aware of include the following:

  • When attempting to enable Birthday Calendars in Outlook for the Web, an error occurs and Birthday Calendars are not enabled.
  • When failing over a public folder mailbox to a different server, public folder hierarchy replication may stop until the Microsoft Exchange Service Host is recycled on the new target server.

Fixes for both issues are planned for Cumulative Update 6.

Release Details

KB articles that describe the fixes in each release are available as follows:

Exchange Server 2016 Cumulative Update 5 does not include new updates to Active Directory Schema. If upgrading from an older Exchange version or installing a new server, Active Directory updates may still be required. These updates will apply automatically during setup if the logged on user has the required permissions. If the Exchange Administrator lacks permissions to update Active Directory Schema, a Schema Admin must execute SETUP /PrepareSchema prior to the first Exchange Server installation or upgrade. The Exchange Administrator should execute SETUP /PrepareAD to ensure RBAC roles are current.

Exchange Server 2013 Cumulative Update 16 does not include updates to Active Directory, but may add additional RBAC definitions to your existing configuration. PrepareAD should be executed prior to upgrading any servers to Cumulative Update 16. PrepareAD will run automatically during the first server upgrade if Exchange Setup detects this is required and the logged on user has sufficient permission.
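
For reference, the preparation switches mentioned above are run from an elevated prompt in the folder where the Cumulative Update has been extracted. A hedged sketch (switch names per the Exchange setup documentation):

# Run by an account with Schema Admins rights if the Exchange administrator cannot update the schema
.\Setup.exe /PrepareSchema /IAcceptExchangeServerLicenseTerms

# Run to update Active Directory objects and RBAC role definitions
.\Setup.exe /PrepareAD /IAcceptExchangeServerLicenseTerms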

Additional Information

Microsoft recommends all customers test the deployment of any update in their lab environment to determine the proper installation process for your production environment. For information on extending the schema and configuring Active Directory, please review the appropriate TechNet documentation.

Also, to prevent installation issues you should ensure that the Windows PowerShell Script Execution Policy is set to “Unrestricted” on the server being upgraded or installed. To verify the policy settings, run the Get-ExecutionPolicy cmdlet from PowerShell on the machine being upgraded. If the policies are NOT set to Unrestricted you should use the resolution steps in KB981474 to adjust the settings.
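
For example, the check and (if needed) the change can be made from an elevated PowerShell session; this is a minimal sketch, and KB981474 remains the authoritative set of steps:

# Show the effective execution policy at each scope
Get-ExecutionPolicy -List

# Illustrative only: set the machine-wide policy to Unrestricted before running Setup
Set-ExecutionPolicy Unrestricted -Scope LocalMachine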

Reminder: Customers in hybrid deployments where Exchange is deployed on-premises and in the cloud, or who are using Exchange Online Archiving (EOA) with their on-premises Exchange deployment are required to deploy the most current (e.g., 2013 CU16, 2016 CU5) or the prior (e.g., 2013 CU15, 2016 CU4) Cumulative Update release.

For the latest information on Exchange Server and product announcements please see What’s New in Exchange Server 2016 and Exchange Server 2016 Release Notes. You can also find updated information on Exchange Server 2013 in What’s New in Exchange Server 2013, Release Notes and product documentation available on TechNet.

Note: Documentation may not be fully available at the time this post is published.

The Exchange Team

Exchange Server Edge Support on Windows Server 2016 Update


Today we are announcing an update to our support policy for Windows Server 2016 and Exchange Server 2016. At this time we do not recommend customers install the Exchange Edge role on Windows Server 2016. We also do not recommend customers enable antispam agents on the Exchange Mailbox role on Windows Server 2016 as outlined in Enable antispam functionality on Mailbox servers.

Why are we making this change?

In our post Deprecating support for SmartScreen in Outlook and Exchange, Microsoft announced we will no longer publish content filter updates for Exchange Server. We believe that Exchange customers will receive a better experience using Exchange Online Protection (EOP) for content filtering.

We are also making this recommendation due to a conflict with the SmartScreen Filters shipped for Windows, Microsoft Edge and Internet Explorer browsers. Customers running Exchange Server 2016 on Windows Server 2016 without KB4013429 installed will encounter an Exchange uninstall failure when decommissioning a server. The failure is caused by a collision between the content filters shipped by Exchange and Windows, which have conflicting configuration information in the Windows registry.

This collision also impacts customers who install KB4013429 on a functional Exchange Server. After the KB is applied, the Exchange Transport Service will crash on startup if the content filter agent is enabled on the Exchange Server. The Edge role enables the filter by default and does not have a supported method to permanently remove the content filter agent. The new behavior introduced by KB4013429, combined with our product direction to discontinue filter updates, is causing us to deprecate this functionality in Exchange Server 2016 more quickly if Windows Server 2016 is in use.

What about other operating systems supported by Exchange Server 2016?

Due to the discontinuance of SmartScreen Filter updates for Exchange server, we encourage all customers to stop relying upon this capability on all supported operating systems. Installing the Exchange Edge role on supported operating systems other than Windows Server 2016 is not changed by today’s announcement. The Edge role will continue to be supported on non-Windows Server 2016 operating systems subject to the operating system lifecycle outlined at https://support.microsoft.com/lifecycle.

Help! My services are already crashing or I want to proactively avoid this

If you used the Install-AntiSpamAgents.ps1 to install content filtering on the Mailbox role:

  1. Find a suitable replacement for your email hygiene needs, such as EOP or another third-party solution
  2. Run the Uninstall-AntiSpamAgents.ps1 from the \Scripts folder created by Setup during Exchange installation
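
A minimal sketch of step 2, assuming Exchange was installed to the default path (adjust the path for your environment):

# Remove the antispam agents installed by Install-AntiSpamAgents.ps1
$exscripts = 'C:\Program Files\Microsoft\Exchange Server\V15\Scripts'
& "$exscripts\Uninstall-AntiSpamAgents.ps1"

# Restart the transport service so the agent change takes effect
Restart-Service MSExchangeTransport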

If you are running the Edge role on Windows Server 2016:

  1. Delay deploying KB4013429 to your Edge role or uninstall the update if required to restore service
  2. Deploy the Edge role on Windows Server 2012 or Windows Server 2012 R2 (preferred)

Support services are available for customers who may need further assistance.

The Exchange Team

Help us test Cloud Attachments in Outlook 2016 with SharePoint Server 2016


My name is Steven Lepofsky, and I’m an engineer on the Outlook for Windows team. We have released (to Insiders) support for Outlook 2016’s Cloud Attachment experience with SharePoint Server 2016. We need your help to test this out and give us your feedback!

So, what do I mean by “cloud attachments?” Let’s start there.

The Cloud Attachment Experience Today

Back when we shipped Outlook 2016, we included a refreshed experience for how you can add attachments in Outlook. To recap, here are a few of the new ways Outlook helped you to share your files and collaborate with others:

We added a gallery that shows your most recently used documents and files. Files in this list could come from Microsoft services such as OneDrive, OneDrive for Business, SharePoint hosted in Office 365 or your local computer. When you attach these files, you have the option of sharing a link to the file rather than a copy. With the co-authoring power of Microsoft Office, you can collaborate in real time on these documents without having to send multiple copies back and forth.

[Image]

Is the file you’re looking for not showing up in the recent items list? Outlook includes handy shortcuts to Web Locations where your file might be stored:

[Image]

And in a recent update, we gave you the ability to upload files directly to the cloud when you attach a file that is stored locally:

[Image]

Adding Support for SharePoint Server 2016

Until now, Cloud Attachments were only available from Office 365 services or the consumer version of OneDrive. We are now adding the ability to connect to SharePoint Server 2016, so you can find and share files from your on-premises SharePoint server in a single click. We’d love your help testing this out before we roll it out to everyone!

The new experience will match what we have today, just with an additional set of locations. Once set up, you’ll have new entries under Attach File -> Browse Web Locations. These will show up as “OneDrive for Business” for a user’s personal documents folder, and “Sites” for team folders.

Note: If you also happen to be signed in to any Office 365 SharePoint or OneDrive for Business sites under File -> Office Account, both sites may show up. The difference will be that the Office 365 versions will have branding for your company. For example, it may say “OneDrive – Contoso” rather than “OneDrive for Business”, or “Sites – Contoso” rather than “Sites.”

[Image]

You’ll be able to upload locally attached files to the OneDrive for Business folder located on your SharePoint Server.

[Image]

And, of course, you’ll see recently used files from your SharePoint server start to show up in your recently used files list.

[Image]

How to get set up

Here are the necessary steps and requirements to start testing this feature out:

  1. This scenario is only supported if you are also using Exchange Server 2016. You’ll need to configure your Exchange server to point to your SharePoint Server 2016 Internal and/or External URLs. See this blog post for details: Configure rich document collaboration using Exchange Server 2016, Office Online Server (OOS) and SharePoint Server 2016
  2. You’ll need Outlook for Windows build 16.0.7825.1000 or above.
  3. Ensure that your SharePoint site is included in the Intranet zone.
  4. Optional: Ensure that crawling is enabled so that your documents can show up in the recent items gallery. Other features such as uploading a local attachment to your site will work even if crawling is not enabled. See this page for more details: Manage crawling in SharePoint Server 2013

Once enrolled, any user who starts Outlook against a mailbox on an Exchange server configured with your SharePoint Server’s information per step #1 above will start to see the new entry points for the server.

We hope you enjoy this sneak peek, and please let us know how this is working for you in the comments below!

Steven Lepofsky

Sent Items behavior control comes to Exchange Online user mailboxes


It has been a while since we blogged about the ability to control the behavior of Sent Items for shared mailboxes when users either send as or on behalf of shared mailboxes. Today, we are glad to share with you that this feature is currently rolling out for User mailboxes also! What does that mean in real life?

Let’s say you have the following scenario:

  • Mary is a delegator/manager on the team
  • Rob is a delegate on Mary’s mailbox; Rob has Send As or Send on behalf of rights on Mary’s mailbox.
  • When Rob sends an email as Mary, the email will be only in Rob’s Sent Items folder

With this feature enabled on Mary’s mailbox, Exchange will copy the message that Rob sends as Mary to the Sent Items folder in Mary’s mailbox. In other words, both Rob and Mary will have the message in their Sent Items folders.

We have heard this request more than once, and now we are rolling it out to an Exchange Online mailbox near you! The configuration and behavior of the feature is the same as for the shared mailbox.

Note: If the user has used the Outlook 2013 feature to change the folder that Sent Items are saved to, the messages will be copied to that folder instead of the user’s Sent Items folder. Users can reconfigure this by clicking the Save Sent Items To button on the Email Options tab.

To enable message copy for messages Sent As the delegator:

Set-Mailbox <delegator mailbox name> -MessageCopyForSentAsEnabled $true

To enable message copy for messages Sent On Behalf of the delegator:

Set-Mailbox <delegator mailbox name> -MessageCopyForSendOnBehalfEnabled $true

To disable message copy for messages Sent As the delegator:

Set-Mailbox <delegator mailbox name> -MessageCopyForSentAsEnabled $false

To disable message copy for messages Sent On Behalf of the delegator:

Set-Mailbox <delegator mailbox name> -MessageCopyForSendOnBehalfEnabled $false

Note: You can use the Office 365 Portal to configure this for shared mailboxes, but to configure user mailboxes, you’ll need to use PowerShell (the team would like to hear if you feel it should be in the Portal too!). For various other details of behavior, please see the shared mailbox post.
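
For example, a quick way to check and then enable both behaviors on a delegator mailbox from Exchange Online PowerShell might look like this (the mailbox name is illustrative):

# Inspect the current Sent Items copy settings on Mary's mailbox
Get-Mailbox "Mary" | Format-List MessageCopyForSentAsEnabled, MessageCopyForSendOnBehalfEnabled

# Enable both behaviors so messages Rob sends as (or on behalf of) Mary also land in Mary's Sent Items
Set-Mailbox "Mary" -MessageCopyForSentAsEnabled $true -MessageCopyForSendOnBehalfEnabled $true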

Next question that some might have is: What about on-premises? We know you want to use this on premises also, and will update when we have more details!

Enjoy!

The Calendaring Team


Send-as and Send-on-behalf of for groups in Outlook


Today, we are excited to announce the ‘Send as’ and ‘Send on behalf of’ features for groups in Outlook, which bring you one step closer to turning your email into a great customer support solution.

With the new ‘Send as’ and ‘Send on behalf of’ feature, members of the group can respond to conversations using the shared identity of the Group instead of their individual personal identity – without losing the personal, individual touch. Because sometimes, that’s just what you need.

Like other groups in Outlook, members can read all messages sent to the group. But with this feature turned on, responses look like they come from the group rather than the individual.

Here’s what Send on Behalf and Send As look like from the recipient’s perspective:

[Images: Send on Behalf (left) and Send As (right)]

If your business is looking for a lightweight, email-centric customer support solution, you’re in luck. This feature might be what you need. The consistent use of a single email address will help your customers develop recognition and trust—ensuring that your email messages are seen.

This feature is particularly helpful in scenarios where you want to set up a group to connect with external customers. The collective knowledge of the group helps resolve those customer inquiries faster, and everyone on the team benefits from the shared knowledge of the group.

Here are some example scenarios:

1. Support@Contoso.com can be set up as a group to receive all customer support inquiries. When your customers send email to this group, any member of the group can respond to the inquiry in a timely fashion without disclosing their individual identity. Subsequent responses from the customer also go back to the group, keeping all information in one place and making it faster for support representatives to respond to new inquiries. Additionally, because the full group conversation history is available, other team members can see that specific customer emails have already been answered.

The support team member would see the following:

[Image]

The recipient (customer) would see the following:

[Image]

2. Some organizations may also want to use ‘Send as’ or ‘Send on behalf of’ for an internal group. For example, you might want all expense reports sent to a Billing department alias rather than bombarding a specific person.

Billing@contoso.com can be set up as a group to receive all your organization’s billing inquiries. Individuals who work in the billing department and are a part of this group can respond back as the Billing department identity.

Sound like what your business needs? Learn how to turn it on.

Allow members to send as or send on behalf of an Office 365 Group – Admin help
Send email from or on behalf of an Office 365 group
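
As a rough sketch of what those articles walk through (the group and user names here are illustrative), granting the permissions from Exchange Online PowerShell looks something like this:

# Grant a member Send As rights on the group's address
Add-RecipientPermission "Support" -Trustee "rob@contoso.com" -AccessRights SendAs

# Or grant Send on Behalf rights instead
Set-UnifiedGroup "Support" -GrantSendOnBehalfTo "rob@contoso.com"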

The Groups Team

Accessing public folder favorites


Introduction

Outlook desktop and Outlook on the web (or OWA, depending on version) do not support the same types of public folders (or folders added to Favorites), so we wanted to talk about the expected behavior when public folders are used. We have seen some questions around this, so let's clear it up!

Public folder types supported by different clients

Outlook supports public folders of the following types:

  • Calendar
  • Contact
  • InfoPath Form
  • Journal
  • Mail and Post
  • Note
  • Task

OWA supports only the following public folder types:

  • Mail and Post
  • Calendar
  • Contact

Adding a public folder to Favorites using Outlook or OWA

Adding public folders to Favorites is slightly different depending on the client. Please see this article which explains how to do it in the respective client.

Things to keep in mind

OWA only supports folder types such as Mail, Contact and Calendar. Public folders of type Tasks (and other unsupported types) are not available in OWA even though they can be added to Favorites using Outlook. If OWA does not support a specific folder type added to Favorites by Outlook, it will display the folder, but it will be greyed out.

Another behavior that is very different in OWA is that OWA does not have a common view for different folder types like Outlook does. When the user adds a public folder to Favorites using OWA, depending on the folder type, the user may not see it added in the default Favorites view in OWA, but the folder might already be added to the respective app launcher tab.

To understand this better, let's consider a scenario where you create a public folder with the item type set to "Contact items". When you add this public folder to public folder Favorites using OWA, it will list the folder type as highlighted in the following screenshot:

[Image]

This means that you need to go to the corresponding section, such as Mail, Calendar or People, to access the different types of folders that were added to Favorites.

In the case of a folder of type Contact, it will be placed in the People tab as shown below:

[Image]

Regular folders containing Mail items will continue to be added to the regular Favorites folder in OWA.

If the public folder being added using OWA is of folder type Calendar, then the calendar will be populated in the Other Calendars section:

[Image]

If the added public folder needs to be removed from Favorites, right-click the relevant public folder and select Remove from Favorites.

Public folder Favorites sync between Outlook and OWA

Public folder Favorites sync between Outlook and OWA: a public folder (of a supported type) added to Favorites using Outlook will sync to OWA, and vice versa.

Important: For public folder Favorites sync to work between Outlook and OWA, the Outlook client should be fully updated. There were some known issues with Favorites sync in older versions of the Outlook client, so updating is important.

Any supported folder type added to public folder Favorites using Outlook will sync to OWA and will show in public folder Favorites; in similar fashion, any supported folder added to the public folder Favorites using OWA will be automatically added to the public folder Favorites section in the Outlook client.

The only additional consideration when adding a Mail folder to Favorites in Outlook is:

  1. You need to add the desired public folder to the public folder favorites using the method which has been discussed earlier.
  2. Public folders of Mail type need to be additionally added to Default Favorites section in Outlook by selecting the option “Show in Favorites”. Once this option is selected the respective folders will sync up to OWA and automatically appear in OWA Favorites.

[Image]

Note: This option is only available for Mail folders (message class of IPM.Post); public folders of other types will not be seen under the Default Favorites section in the Outlook client.

As far as the other direction of the sync goes: a public folder added to Favorites using OWA will sync to the Outlook client and will show in the default public folder Favorites, but will not appear in the Default Favorites in the Outlook client.

To recap

  1. Public folders added as Favorites using Outlook client will auto-populate in OWA in respective navigation tabs. This applies to folder types such as Mail, Contacts and Calendars.
  2. Public folders added as Favorites using OWA get automatically added to the Favorites section in Outlook client due to bi-directional sync.
  3. Public folders of Mail type can be auto-populated in OWA Default Favorites view by selecting the option “Show in Favorites” in the Outlook client.
  4. Removing a public folder from the Favorites list using OWA will not remove it from the Favorites list in the Outlook client; it will need to be removed manually in Outlook.

I hope readers find this post useful! I would like to thank the Public Folder Crew for their help reviewing the blog post. I would also like to say thanks to Nino Bilic and Scott Oseychik for their help in getting this blog post ready for publishing.

Siddhesh Dalvi
Support Escalation Engineer
Exchange Online Escalations


2nd call for public folders to O365 Groups migrations


We got some replies to our previous post on the subject, but wanted to reach out again as we want to make sure we validate this scenario well. Therefore, here is an updated request:

If you are using Public Folders (legacy or modern) and would like to migrate some of them to Office 365 Groups, we are working on a solution for that. We are starting with the migration of Calendar and Mail folders and will move on to other types as we complete work on those. We would like customers who want to try this migration to provide feedback. Please email us the information below if you are interested. You can also send us your information if you would like to try migrating other types of public folders (other than Calendar and Mail) as we extend the support to those folder types, but our immediate work is related to Calendar and Mail.

Drop us an email at: pftogroupmigration@service.microsoft.com

  • Customer name:
  • Tenant domain name in Exchange Online:
  • Location of public folders; on-premises or Exchange Online:
  • If on-premises, Exchange version of public folder servers:
  • Public folder types to migrate (Mail, Calendar – sooner; Task, Contact – later on):

Your organization might need to join our TAP program (depending on public folder location) – and if so, we will share those details with you after reviewing the above.

A little update to provide a timeline: as part of this, we are ready to start migrating Exchange Online (EXO) public folders to Groups right away, with legacy / on-premises public folders following within a few months.

Public Folder Migration team

Demystifying Certificate Based Authentication with ActiveSync in Exchange 2013 and 2016 (On-Premises)


Some of the more complicated support calls we see are related to Certificate Based Authentication (CBA) with ActiveSync. This post is intended to provide some clarifications of this topic and give you troubleshooting tips.

What is Certificate Based Authentication (CBA)? Instead of using Basic or WIA (Windows Integrated Authentication), the device has a client (user) certificate installed, which is used for authentication. The user no longer has to save a password to authenticate with Exchange. This is not related to using SSL to connect to the server; we assume that you already have SSL set up. Also, just to be clear (as some people confuse the two), CBA is not two-factor authentication (2FA).

How does the client certificate get installed on the device? There are several MDM (Mobile Device Management) solutions that can install the client certificate on the device.

The most important part of working with CBA is to know where the client certificate will be accepted (or ‘terminated’). How you implement CBA will depend on the answers to the following questions:

  • Will Exchange server be accepting the client certificate?
  • Will an MDM or other device using Kerberos Constrained Delegation (KCD) be accepting the client certificate?

You can choose only one. You can’t have both Exchange and a device accepting the client certificate.

This post assumes that the user certificates have already been deployed in AD before CBA was implemented. The requirements for user certificates are documented here: Configure certificate based authentication in Exchange 2016.

If Exchange Server is accepting the client certificate

This configuration is simple and is fully documented in the link below, which applies to Exchange 2013/2016. The configuration for legacy versions follows the same IIS configuration steps. The overall functionality of CBA has not changed across versions; however, the requirements may vary.

Configuration of CBA is done via IIS Manager. The overall steps are: install the Client Certificate Mapping Authentication feature on all CAS servers, enable client certificate authentication, set SSL client certificates to "required", disable other authentication methods, and finally enable client certificate mapping on the virtual directory.
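
As an illustrative sketch only (the site and virtual directory names assume the defaults, and the linked documentation remains the authoritative procedure), the IIS portion of those steps could be scripted roughly like this:

# Install the Client Certificate Mapping Authentication feature (Active Directory based mapping)
Install-WindowsFeature Web-Client-Auth

$appcmd = "$env:windir\System32\inetsrv\appcmd.exe"
$easVdir = 'Default Web Site/Microsoft-Server-ActiveSync'

# Enable client certificate mapping authentication on the ActiveSync virtual directory
& $appcmd set config $easVdir -section:system.webServer/security/authentication/clientCertificateMappingAuthentication /enabled:"True" /commit:apphost

# Require SSL and require (not just accept) client certificates
& $appcmd set config $easVdir -section:system.webServer/security/access /sslFlags:"Ssl,SslNegotiateCert,SslRequireCert" /commit:apphost

# Disable the other authentication methods on the virtual directory
& $appcmd set config $easVdir -section:system.webServer/security/authentication/basicAuthentication /enabled:"False" /commit:apphost
& $appcmd set config $easVdir -section:system.webServer/security/authentication/windowsAuthentication /enabled:"False" /commit:apphost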

Important Notes:

  1. You cannot use multiple authentication methods and have client certificates enabled on the virtual directory. The client must authenticate with either a client certificate or a username and password, not both.
  2. SSL settings should be set to “Require” not “Accept”. You can have connection failures if set improperly.

If MDM or another device is accepting the certificate and using KCD to authenticate the client device

What is important to note here is that the client certificate will be accepted at the device; therefore, you would NOT configure client certificates on Exchange.

  • Each vendor should have updated documentation for working with the current Exchange version.
  • To accept the client certificate, the MDM would require that KCD be configured to authenticate to Active Directory.
  • Most vendors expect Windows Integrated Authentication configured on IIS/Exchange. This would allow the authentication to be passed without any additional prompts to the client device. All other authentication methods would be disabled.

Note: When you enable Integrated authentication on Exchange, you should ensure that the authentication “Providers” have both NTLM and Negotiate enabled in IIS Manager.

[Image]
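
To double-check the providers from a command prompt, a rough sketch (the site and virtual directory names assume the defaults):

# List the Windows authentication providers enabled on the ActiveSync virtual directory;
# both Negotiate and NTLM should appear under <providers>
& "$env:windir\System32\inetsrv\appcmd.exe" list config "Default Web Site/Microsoft-Server-ActiveSync" -section:system.webServer/security/authentication/windowsAuthentication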

Overall authentication process when client certificate is accepted by MDM:
  1. The client device contacts the MDM with a client certificate that contains the UPN in the Subject Alternative Name section of the certificate.
  2. The MDM authenticates the user with Active Directory.
  3. Using KCD, a Kerberos ticket for the user is issued to the MDM.
  4. The MDM sends the user's credentials to Exchange, with Windows Integrated (only) configured on Exchange.
  5. Exchange responds to the MDM with the mail data.
  6. The MDM responds to the client with the mail data.

Coexistence with Exchange, when Exchange is accepting the client certificate

When adding Exchange 2013/2016 to the environment and the Exchange 2013/2016 server is accepting the client certificate, it's important to disable any client certificate configuration on the legacy CAS. This is because the client certificate will not be proxied to the legacy server. The authentication on the legacy CAS would go back to the default of Basic on the "Microsoft-Server-ActiveSync" virtual directory, and "Windows Integrated" on the subfolder named "Proxy".

[Image]

[Image]

Troubleshooting

Here are some troubleshooting steps!

If Exchange Server is accepting the client certificate

If Exchange is configured to accept the client certificate, use the IIS logs and look for requests for /Microsoft-Server-ActiveSync. Determine the error code that is returned. IIS error codes are found here.

  • Verify the UPN configured in the "Subject Alternative Name" portion of the client certificate. In ADUC, click "View, Advanced Features", locate the user account, select "Published Certificates", and click the "Details" tab.
  • Client certificates and SSL “Required” should not be enabled on the Default Web Site, only on the MSAS “Microsoft-Server-ActiveSync” virtual directory.
  • Verify there are no additional authentication methods enabled on the MSAS virtual directory. See “Step 4” in Configure certificate based authentication in Exchange 2016

If MDM is accepting client certificate

  • With the MDM vendor, verify that KCD is working correctly by checking the security logs on the MDM to verify Kerberos is working.
  • Verify if the request is getting to Exchange by looking at the IIS logs requests for /Microsoft-Server-ActiveSync.
  • Verify Windows Integrated (only) is enabled on Exchange.

Attachments

If users have issues with attachments, follow “Step 7” in Configure certificate based authentication in Exchange 2016

Troubleshooting Logs and Tools

Use the IIS logs to determine if the device reached the Exchange server. Look for requests to /Microsoft-Server-ActiveSync virtual directory.

Refer to The HTTP status code in IIS 7.0, IIS 7.5, and IIS 8.0 KB for information on the various error codes in the IIS logs. For example, IIS error code 403.7 means "Client certificate required"; from this you would verify that the device has the client certificate installed.

  • IIS Logs – IIS logs can be used to review the connection for Microsoft-Server-ActiveSync. More info here.
  • Log Parser Studio – Log Parser Studio is a GUI for Log Parser 2.2. LPS greatly reduces complexity when parsing logs. Download it here
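
As a rough illustration of working through the IIS logs with plain PowerShell (this assumes the default log folder for the Default Web Site and the default W3C field order, so treat the field indexes as an assumption to verify against the #Fields header in your own logs):

# Summarize client IP and status/substatus codes for ActiveSync requests in the most recent IIS log
$log = Get-ChildItem 'C:\inetpub\logs\LogFiles\W3SVC1' -Filter '*.log' | Sort-Object LastWriteTime -Descending | Select-Object -First 1
Get-Content $log.FullName | Where-Object { $_ -match 'Microsoft-Server-ActiveSync' } | ForEach-Object {
    $f = $_ -split ' '
    # Default W3C field order: cs-uri-stem is index 4, c-ip is 8, sc-status is 11, sc-substatus is 12
    '{0} {1}.{2}' -f $f[8], $f[11], $f[12]
} | Group-Object | Sort-Object Count -Descending | Format-Table Count, Name -AutoSize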

I wanted to thank Jim Martin for technical review of this post.

Charlene Stephens (Weber)

Office 365 Directory Based Edge Blocking support for on-premises Mail Enabled Public Folders


Until now, our on-premises customers who use Mail Enabled Public Folders (MEPF) could not use services like Directory Based Edge Blocking (DBEB). If DBEB is enabled, any mail sent to a Mail Enabled Public Folder is dropped at the service network perimeter. This is because DBEB queries Azure Active Directory (AAD) to find out whether a given mail address is valid. Because Mail Enabled Public Folders are not synced to Azure Active Directory, all MEPF addresses are considered invalid by DBEB. The sender of mail to a MEPF would receive the following NDR:

‘550 5.4.1 [<sampleMEPF>@<recipient_domain>]: Recipient address rejected: Access denied’.

To resolve this issue, the latest Azure AD Connect tool update introduces an option to synchronize MEPFs from on-premises AD to AAD. Admins can enable this through the newly introduced 'Exchange Mail Public Folders' option on the Optional Features page of a Custom installation during Azure AD Connect installation/upgrade.

When you select this option and perform a full sync, all Mail Enabled Public Folders from the on-premises AD(s) will be synced to AAD. Once synced, you can enable DBEB. Mail Enabled Public Folder addresses will no longer be considered invalid by DBEB, and messages will be delivered to them just as they are delivered to any other recipient.
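
Once the option is selected, a full sync can be triggered from the AAD Connect server; a minimal sketch (cmdlet names per the Azure AD Connect documentation):

# Run on the Azure AD Connect server to start a full (initial) synchronization cycle
Import-Module ADSync
Start-ADSyncSyncCycle -PolicyType Initial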

Details of the AAD Connect tool version required

This feature is available in version 1.1.524.0 (May 2017) or any later version of the Azure AD Connect tool.

The Azure AD Connect tool can be downloaded from the following location: Download Azure AD Connect.

For more details, here is the link to the version history of Azure AD Connect.

IMPORTANT NOTES:

  • Directory Based Edge Blocking is not yet supported for Mail Enabled Public Folders hosted in Exchange Online. This feature enables DBEB support only for Mail Enabled Public Folders hosted on-premises.
  • For Exchange Online Protection (EOP) standalone (i.e., customers who have only Exchange on-premises configured, no presence in Exchange Online, and no "advanced" EOP features), this synchronization through the AAD Connect tool is enough for DBEB to work.
  • For Exchange Online (EXO) & EOP (i.e., customers who have both on-premises Exchange and Exchange Online configured, or who are using features such as DLP or ATP), this feature does not create the actual public folder objects in the Exchange Online directory. Additional synchronization via PowerShell is required for DBEB to work if you are using Exchange Online.
  • For customers who are planning to migrate Public Folders from on-premises to Exchange Online: nothing in the migration procedure has changed with this feature support. One extra point to take care of before starting public folder migration to EXO: ensure the 'Exchange Mail Public Folders' option in the Azure AD Connect tool is *not* checked. If it is checked, uncheck it before you start migration. By default, it will be unchecked.

Customers who had a work-around in place

There were some customers who did not want to disable DBEB despite having Mail Enabled Public Folders. These customers opted for a workaround of creating MSOL objects (like EOPMailUser, MailUser or MailContact) in Azure Active Directory with the same SMTP addresses as the Mail Enabled Public Folders, so that these addresses are considered valid by DBEB. Customers who opted for this workaround are requested to remove all such MSOL objects before syncing Mail Enabled Public Folders through the AAD Connect tool. If these 'impersonation objects' have not been removed prior to the new synchronization, they are likely to cause a soft-match error. In that case, the sync of the Mail Enabled Public Folder from on-premises AD to Azure Active Directory will not succeed, and an email similar to the following will be received:

“Identity synchronization Error Report: <Date>”

Unable to update this object because the following attributes associated with this object have values that may already be associated with another object in your local directory services: [ProxyAddresses SMTP:SampleMEPF@mail.contoso.com,smtp:SampleMEPF@contoso.com;Mail SampleMail@mail.contoso.com;]. Correct or remove the duplicate values in your local directory. Please refer to http://support.microsoft.com/kb/2647098 for more information on identifying objects with duplicate attribute values.

As mentioned in the description, you can correct or remove the entries with duplicate SMTP address. Below are corresponding links for each scenario:

Once the objects have been cleaned up, performing a full sync will ensure Mail Enabled Public Folders are successfully synced to Azure Active Directory. More info here: http://support.microsoft.com/kb/2647098.

Public Folder Team

Deep Dive: How Hybrid Authentication Really Works


A hybrid deployment offers organizations the ability to extend the feature-rich experience and administrative control they have with their existing on-premises Microsoft Exchange organization to the cloud. A hybrid deployment provides the seamless look and feel of a single Exchange organization between an on-premises Exchange organization and Exchange Online in Microsoft Office 365. In addition, a hybrid deployment can serve as an intermediate step to moving completely to an Exchange Online organization.

But one of the challenges some customers are concerned about is that this type of deployment requires that some communication take place between Exchange Online and Exchange on-premises. This communication takes place over the Internet and so this traffic must pass through the on-premises company firewall to reach Exchange on-premises.

The aim of this post is to explain in more detail how this server to server communication works, and to help the reader understand what risks this poses, how these connections are secured and authenticated, and what network controls can be used to restrict or monitor this traffic.

The first thing to do is to get some basic terminology clear. With the help of TechNet and other resources, here are some basic definitions;

  • Azure Authentication Service – The Azure Active Directory (AD) authentication Service is a free cloud-based service that acts as the trust broker between your on-premises Exchange organization and the Exchange Online organization. On-premises organizations configuring a hybrid deployment must have a federation trust with the Azure AD authentication service. You may have heard of this referred to previously as the Microsoft Federation Gateway, and while speaking purely technically the two are quite different, they are different implementations of essentially what is the same thing. So, to avoid confusion, we shall refer to both as the Azure Active Directory (AD) authentication Service, or Azure Auth Service for short.
  • Federation trust – Both the on-premises and Office 365 service organizations need to have a federation trust established with the Azure AD authentication service. A federation trust is a one-to-one relationship with the Azure AD authentication service that defines parameters and authentication statements applicable to your Exchange organization.
  • Organization relationships – Organization relationships are needed for both the on-premises and Exchange Online organization and are configured automatically by the Hybrid Configuration Wizard. An organization relationship defines features and settings that are available to the relationship, such as whether free/busy sharing is allowed.
  • Delegated Auth (DAuth) – Delegated authentication occurs when a network service accepts a request from a user and can obtain a token to act on behalf of that user to initiate a new connection to a second network service.
  • Open Authorization (OAuth) – OAuth is an authorization protocol – or in other words, a set of rules – that allows a third-party website or application to access a user’s data without the user needing to share login credentials.

A History Lesson

Exchange has had a few different solutions for enabling inter-organization connectivity, which is essentially what a hybrid deployment is; two physically different Exchange orgs (on-premises and Exchange Online) appearing to work as one logical org to the users.

One of the most common uses of this connectivity is to provide users the ability to share free/busy information, so that’s going to be the focus of the descriptions used here. Of course, hybrid also allows users to send secure email to each other, but that rarely seems to come up as a concern as every org lets SMTP flow in and out without much heartache, so we won’t be digging into that here. There are other features you get with hybrid, such as MailTips, but these use the same underlying protocol flow as Free/Busy, so if you know how Free/Busy works, you know how they work too.

So, one of the first cross-premises features we released was cross-forest availability. If the two forests did not have a trust relationship then each admin created a service account, gave that service account permissions to objects in their own forest (calendars in this case), and then gave those credentials to the other organization’s admins. If the forests were trusted each admin would instead give permissions to the Client Access Servers from the remote forest to read Free/Busy in their own forest.

Each org admin would then add an Availability Address Space object to their own Exchange org, with the SMTP domain details for the other forest, and, in the untrusted forest case, provide the pre-determined creds for that forest. The admins also had to sync directories between the orgs (or import contacts for users in the remote forest) too. That was a hassle. But, once they did that, lookups for users who had a contact object in the forest triggered Exchange to look at the cross-forest availability config, and then use the previously obtained credentials or server permissions to make a call to the remote forest to request free/busy information.

The diagram below shows this at a high level for the untrusted forest version of this configuration.

[Diagram: cross-forest availability with untrusted forests]

Clearly there were some shortcomings with this approach. Directory sync is a big requirement for most organizations, and credentials had to be exchanged and managed. Connections went directly from server to server, and AutoDiscover had to be set up and working as it was used to find the correct EWS endpoints in the remote org. One thing some customers did like, though, was that these connections could be pre-authenticated with an application layer firewall (TMG back in the day was very popular), as creds were used in a Basic handshake, encrypted by SSL.

These shortcomings led us to design a new approach that allowed two servers to talk to each other securely without having to exchange credentials or perform a full directory sync.

DAuth

Exchange 2010 and later versions of Exchange were built to use this thing called the Azure Auth Service, an identity service that runs in the cloud, to be used as a trust broker for federating organizations, enabling them to share information with other organizations.

Exchange organizations wanting to use federation establish a one-time federation trust with the Azure Auth Service, allowing it to become a federation partner to the Exchange organization. This trust allows servers, on behalf of users authenticated by Active Directory (the identity provider for on-premises users) to be issued Security Assertion Markup Language (SAML) On-Behalf-Of Access Tokens by the Azure Auth Service. These On-Behalf-Of Access Tokens allow users from one federated organization to be trusted by another federated organization. The Organization Relationship or sharing policy that must also be set up governs the level of access partner users have to the organization’s resources.

With the Azure Auth Service acting as the trust broker, organizations aren’t required to establish multiple individual trust relationships with other organizations and can instead do the one-time trust, or Federation configuration, and then establish Organization Relationships with each partner organization.

The trust is established by submitting the organization’s public key certificate (this certificate is created automatically by the cmdlet used to create the trust) to the Azure Auth Service and downloading the Azure Auth Service’s public key. A unique application identifier (ApplicationUri) is automatically generated for the new Exchange organization and provided in the output of the New Federation Trust wizard or the New-FederationTrust cmdlet. The ApplicationUri is used by the Azure Auth Service to identify your Exchange organization.
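
If you want to see what this looks like in your own organization, the trust and the ApplicationUri it was assigned can be inspected from the Exchange Management Shell; a small hedged example:

# Shows the federation trust with the Azure AD authentication service
Get-FederationTrust | Format-List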

This configuration allows an Exchange Server to request an On-Behalf-Of Access Token for a user for the purposes of making an authenticated request to an Exchange Server in a different organization (a partner, or perhaps an Exchange Server hosted in Office 365 in the case of hybrid), by referencing their ApplicationUri.

When the on-premises admin then adds an organization relationship for a partner org, Exchange reaches across to the remote Exchange organization anonymously to the /AutoDiscover/AutoDiscover.svc endpoint using the “GetFederationInformation” method to read back relevant information such as the federated domains list, their ApplicationUri, etc.

Here’s an example of the entry in the cloud, for Contoso’s hybrid Exchange deployment. You can see we know the AutoDiscover endpoint in the on-premises Exchange organization based on this, and what can be done with this agreement.

DomainNames : {contoso.com}
FreeBusyAccessEnabled : True
FreeBusyAccessLevel : LimitedDetails
FreeBusyAccessScope :
MailboxMoveEnabled : False
MailboxMoveDirection : None
DeliveryReportEnabled : True
MailTipsAccessEnabled : True
MailTipsAccessLevel : All
MailTipsAccessScope :
PhotosEnabled : False
TargetApplicationUri : FYDIBOHF25SPDLT.contoso.com
TargetSharingEpr :
TargetOwaURL : https://mail.contoso.com/owa
TargetAutodiscoverEpr: https://autodiscover.contoso.com/autodiscover/autodiscover.svc/WSSecurity

And the same command when run on-premises results in pretty much the same information with the notable differences seen here:

TargetApplicationUri : outlook.com
TargetOwaURL : http://outlook.com/owa/contoso.onmicrosoft.com
TargetAutodiscoverEpr : https://autodiscover-s.outlook.com/autodiscover/autodiscover.svc/WSSecurity
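
The objects shown above are the organization relationships on each side; assuming that is what you want to look at in your own environment, a hedged way to pull them up is:

# Run in Exchange Online PowerShell, or in the on-premises shell for the mirror-image object
Get-OrganizationRelationship | Format-List DomainNames, FreeBusy*, MailTips*, Target*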

Now when a user (Mary in our picture below) in Contoso’s on-premises Exchange environment requests free/busy for a user (Joe) in Contoso’s online tenant (or for a partner org for which there is an organization relationship; this flow works the same), here’s what happens.

[Diagram: DAuth free/busy flow between Contoso on-premises and Exchange Online]

  1. The on-premises contoso.com Exchange Server determines that the target user is external and looks up the Organization Relationship details to find where to send the request.
  2. The on-premises contoso.com Exchange Server submits a token request to the Azure Auth Service for an On-Behalf-Of Access Token for contoso.onmicrosoft.com, referencing contoso.onmicrosoft.com’s ApplicationUri (which of course it knows because of the creation of the Org Relationship), the SMTP address of the requesting user, and the purpose/intent of the request (Free/Busy in this case). This request is encrypted using the Azure Auth Service’s public key and signed using the on-premises organization’s private key, thereby proving where the request is coming from.
  3. The Azure Auth Service returns an On-Behalf-Of Access Token to the server in contoso.com, signed with its own private key (to prove where it came from); the On-Behalf-Of Access Token in the payload is encrypted using the public key of contoso.onmicrosoft.com (which the Azure Auth Service has because contoso.onmicrosoft.com provided it when they set up their own federation trust).
  4. The on-premises contoso.com Exchange Server then submits that token as a SOAP request to contoso.onmicrosoft.com’s AutoDiscover/AutoDiscover.svc/wssecurity endpoint (which it had stored in its Org Relationship config for the partner). The connection is anonymous at the HTTP/network layer, but conforms to WS-Security norms (see References at the end of this document for details on WS-Security). Note: This step is skipped if TargetSharingEpr is set on the Org Relationship object, as that explicitly specifies the EWS endpoint for the target org.
  5. The contoso.onmicrosoft.com Exchange Server validates the signed and encrypted request. This is done at the Windows layer using the Windows Communication Foundation (WCF); Exchange just passes the request to the WCF layer (telling it about the keys and issuer information it has based on the setup of the federation trust), and then, assuming it passes the WCF sniff test, contoso.onmicrosoft.com’s Exchange Server returns the EWS URL the Free/Busy request should be submitted to. (Don’t forget that only the Exchange Servers in contoso.onmicrosoft.com have the necessary private key to decrypt the auth token to understand what it really is.)
  6. The request and auth token are then submitted directly from Exchange in contoso.com to the EWS endpoint of Exchange in contoso.onmicrosoft.com.
  7. We do the same validation of the signed and encrypted request as before, since it’s now hitting a different endpoint on Exchange in contoso.onmicrosoft.com; once done, the server sees that this is a free/busy request from contoso.com (again based on the ApplicationUri contained within the token).
  8. The Exchange Server in contoso.onmicrosoft.com extracts the e-mail address of the requesting user, splits the user from the domain part, and checks the latter against its domain authorization table (based on the Org Relationships configured in the org) to see whether this domain can receive the requested free/busy information. These requests are allowed/denied on a per-domain basis only: if the domain of the requesting user is contained in the Org Relationship then it’s OK to return Free/Busy, and only Default calendar permissions are evaluated.
  9. The server in contoso.onmicrosoft.com responds by providing the free/busy data. Or not. If it wasn’t authorized to do so.
  10. The on-premises contoso.com server returns the result to the requesting client.

What do you need to allow in through the firewall for this to work, then? You need to allow inbound TCP 443 connections to /autodiscover/autodiscover.svc/* and to /ews/* for the actual requests.

This is key – only the receiving Exchange server has the cert required for decrypting the On-Behalf-Of Access Token, so while you might be ok to unpack the TLS for the connection itself on a load balancer or firewall, the token within it is still encrypted to protect it from man in the middle attacks. If you were to install the private key and some smarts on a firewall device, you could open it but all you’d see is a token with values that only make sense to Exchange (the values agreed upon during creation of the Federation Trust). So if you want to verify this token really did come from Azure Auth Service, all you really need to do is verify the digital signature to ensure it was signed by the Azure Auth Service. When a message is signed, it is nearly impossible to tamper with the message but message signing alone does not protect the message content itself from being seen. Using the signature, the receiver of the SOAP message can know that the signed elements have not changed en route. Anything more than that, such as decrypting the inner token would require an awful lot of Exchange specific information, which might lead you to conclude the best place to do this is Exchange.

Now onto OAuth

So firstly, why did we move away from DAuth and switch to using OAuth?

Essentially, we made some architectural changes in the Azure Auth Service, and WCF was falling out of favor and was no longer the direction Microsoft was taking as the framework for service-oriented applications. We had built something that was quite custom, and wanted to move to a more open, standards-based model. OAuth is that.

So how does OAuth work at a high level?

At a high-level OAuth uses the same Trust Broker concept as DAuth, each Exchange organization trusts the Azure Auth Service, and tokens from that service are used to authorize requests, proving their authenticity.

There are several noteworthy differences between DAuth and OAuth.

The first is that OAuth provides the ability to allow a server with the resource being requested to redirect the client (or server) requesting the data to the trusted issuer of access tokens. It does this when the calling server or client sends an anonymous call with an empty value in the HTTP Bearer header – this is what tells the receiving server that the client supports OAuth, triggering the redirection response, sending the client to the server that can issue access tokens.

The second thing to note is that the Exchange implementation of OAuth for Server to Server Auth we call S2S OAuth 2.0 and we have documented it in detail here. This document explains a lot of detail about what is contained in the token, so if you’re interested, that’s the document to snuggle up with. As you’ll see we don’t use this redirection mentioned above for our server to server hybrid traffic but it’s good to know it’s there as it helps understand OAuth more broadly.

Here’s an extract directly from the protocol specification (linked to later in this document) which provides a great example of OAuth in practice. In this example, this is the response received when one server tries to access a resource on another server in the same hybrid org.

HTTP/1.1 401 Unauthorized
Server: Fabrikam/7.5
request-id: 443ce338-377a-4c16-b6bc-c169a75f7b00
X-FEServer: DUXYI01CA101
WWW-Authenticate: Bearer client_id="00000002-0000-0ff1-ce00-000000000000", trusted_issuers="00000001-0000-0000-c000-000000000000@*"
WWW-Authenticate: Basic Realm=””
X-Powered-By: ASP.NET
Date: Thu, 19 Apr 2012 17:04:16 GMT
Content-Length: 0

Following this response, the requesting server then sends its credentials to the indicated token issuer in the response above (trusted_issuers="00000001-0000-0000-c000-000000000000@*"), which is an endpoint it knows about because it too has an AuthServer object with that same id. That token broker authenticates the client and issues access and refresh tokens to the requestor. Then the requestor uses the access token to access the resource it requested on the server.

Below is an example of this, from the same specification document. In this example, the requestor went to the Trusted Issuer referred to in the example above, and that issuer authenticated the requestor and issued an access token for the server allowing it to request the data. The requestor then would use this token to access the resource it originally requested on the remote server.

This is an example of a JWT (JSON Web Token) actor token issued by an STS. For more information about the claim values contained in this security token, see section 2.2 of the specification document.

actor:
{
  "typ":"JWT",
  "alg":"RS256",
  "x5t":"XqrnFEfsS55_vMBpHvF0pTnqeaM"
}.{
  "aud":"00000002-0000-0ff1-ce00-000000000000/contoso.com@b84c5afe-7ced-4ce8-aa0b-df0e2869d3c8",
  "iss":"00000001-0000-0000-c000-000000000000@b84c5afe-7ced-4ce8-aa0b-df0e2869d3c8",
  "nbf":"1323380070",
  "exp":"1323383670",
  "nameid":"00000002-0000-0ff1-ce00-000000000000@b84c5afe-7ced-4ce8-aa0b-df0e2869d3c8",
  "identityprovider":"00000001-0000-0000-c000-000000000000@b84c5afe-7ced-4ce8-aa0b-df0e2869d3c8"
}

Back to the differences between DAuth and OAuth: a notable difference between the two is that OAuth tokens are not encrypted. The token is also passed as header information, not as part of the body. There is therefore a reliance upon SSL/TLS (hereafter just referred to as TLS) to protect the traffic in transit.

And the last thing to note is that we only use this flow for on-premises to Exchange Online (and vice-versa) relationships; this isn’t something we use for partner to partner relationships. So if you are hybrid with Exchange Online and have Partner to Partner Org Relationships too, you are using both DAuth and OAuth.

So how does OAuth work in the context of Exchange hybrid? Let’s start with what’s needed to set up the relationship to support this flow. The steps are documented at https://technet.microsoft.com/en-us/library/dn594521(v=exchg.150).aspx, but all of this is now performed automatically by the newest versions of the Hybrid Configuration Wizard (HCW). So even though the wizard is the only right way to do this, we’re going to walk through what it does so we understand what is really going on.

The HCW first adds a new AuthServer object to the on-premises AD/Exchange Org specifying the Azure OAuth Service endpoint to use. The AuthServer object is the OAuth equivalent of the Federation Trust object and it stores such things as the thumbprint of the Azure Auth Service’s signing cert, the token issuing endpoint, the AuthMetaDataUrl (which is where the information all comes from anyway, so that’s kind of a circular reference, isn’t it) and so on.

The HCW process creates a self-signed authorization certificate, the public key of which is passed to the Azure Auth Service and will be used by the Azure Auth Service to verify that token requests from the org are authentic. This and the on-premises AppID and other relevant information are stored in the AuthConfig object. This is the OAuth equivalent of the FederationTrust object we had in DAuth.

The HCW registers the well-known AppID for Exchange on-premises, the certificate details, and all the on-premises URLs Exchange Online might use for the connection as Service Principal Names in the Azure Auth Service. This simply tells the Azure Auth Service that Exchange Online may request a token for those URLs and that AppID, which prevents tokens being requested for arbitrary URLs. Exchange Online’s URLs are managed automatically with the Azure Auth Service, so there’s no need for the admin to add any URLs for Exchange Online. Having both Exchange Online and on-premises use the same AppID is part of the reason why, from an auth point of view, there is no difference between the two environments for the Exchange servers within them.

Then the HCW creates the IntraOrganizationConnector object, specifying the domains in the other organization and the DiscoveryEndpoint AutoDiscover URL used to reach them.

Note the name of this object, Intra…, this is for the connection between on-premises Exchange and Exchange Online for the same customer. This is not something for partner to partner communication.
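
If you want to see the objects the wizard creates (or follow the manual steps in the TechNet article above), the relevant cmdlets look roughly like the sketch below. The connector name and target domain shown are placeholders only; substitute your own tenant’s routing domain.

# Inspect the OAuth trust objects the HCW created on-premises.
Get-AuthServer | Format-List Name, IssuerIdentifier, TokenIssuingEndpoint, Enabled
Get-AuthConfig | Format-List CurrentCertificateThumbprint, ServiceName

# Manual equivalent of the connector creation step (illustrative values).
New-IntraOrganizationConnector -Name "HybridIOC" -DiscoveryEndpoint "https://autodiscover-s.outlook.com/autodiscover/autodiscover.svc" -TargetAddressDomains "contoso.mail.onmicrosoft.com"

# Confirm the connector exists and is enabled.
Get-IntraOrganizationConnector | Format-List Name, TargetAddressDomains, DiscoveryEndpoint, Enabled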

So, we’re set up – how does it work when someone wants to go look at the free/busy of someone on the other side of that hybrid relationship?

hybridauth3

  1. Mary on-premises makes a Free/Busy request for Joe, a user in the contoso.onmicrosoft.com tenant.
  2. The on-premises Exchange Server determines that target user is external and does a lookup for an IntraOrganizationConnector to get the AutoDiscover endpoint for the external contoso.onmicrosoft.com organization (matching on SMTP domain).
  3. The on-premises Exchange Server makes an anonymous request to that AutoDiscover endpoint and the server responds with a 401 challenge, containing the ID for the trusted issuer from which it will accept tokens.
  4. The on-premises Exchange Server requests an application token from the Azure Auth Service (the trusted issuer). Key point: this token is for Exchange@Contoso.com and can be cached. If another on-premises user does a Free/Busy request for the same external organization there is no round trip to AAD; the cached token is used.
    1. It does this by sending a self-issued JSON (JWT) security token, asserting its identity and signed with its private key. The security token request contains the aud, iss, nameid, nbf, exp claims. The request also includes a resource parameter and a realm parameter. The value of the resource parameter is the Uniform Resource Identifier (URI) of the server.
    2. Azure Auth Service validates this request using the public key of the security token provided by the client.
    3. Azure Auth Service then responds to the client with a server-to-server security token that is signed with Azure Auth Service’s private key. The security token contains the aud, iss, nameid, nbf, exp, and identityprovider claims
  5. The on-premises Exchange Server then performs an AutoDiscover request using this token and retrieves the EWS endpoint for the target organization.
  6. The on-premises server then goes back to step 4 to request a token for the new audience URI, the EWS endpoint (unless this happens to be one and the same, which it will never be for Exchange Online users, but might be for on-premises users).
  7. The on-premises server then submits that new token to the EWS end point requesting the Free/Busy.
  8. Exchange Online authenticates the Access Token by lookup of the Application Identity and validates the server-to-server security token by checking the values of the aud, iss, and exp claims and the signature of the token using the public key of the Azure Auth Service.
  9. Exchange Online verifies that Mary is allowed to see Joe’s Free/Busy. Unlike DAuth, OAuth allows granular calendar permissions, because the identity of the requesting user (not just the domain) is available to Exchange, and so all permissions are evaluated.
  10. Free/Busy info is returned to the client.
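
If you want to check this flow from your own on-premises servers, the Test-OAuthConnectivity cmdlet exercises the token request and the EWS call for you. A minimal sketch (the mailbox and target URI are placeholders):

# Test the OAuth handshake from on-premises Exchange to Exchange Online EWS.
Test-OAuthConnectivity -Service EWS -TargetUri "https://outlook.office365.com/ews/exchange.asmx" -Mailbox "Mary" -Verbose | Format-List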

What do you need to allow in through the firewall for this flow to work, then? You need to allow TCP 443 inbound connections to /autodiscover/autodiscover.svc/* for AutoDiscover to work correctly, and to /ews/* for the actual requests.

Tokens are signed and so they cannot be modified; the audience URI, for example, cannot be changed by a man-in-the-middle without invalidating the signature. But because the tokens exist in the clear in the packet header, they could be copied and re-used by someone else against the same endpoint if they have access to them. That is why end-to-end TLS is key, and why only trusted devices should be able to perform TLS decryption/re-encryption.

So just as with DAuth, if you want to put a device between Exchange on-premises and Exchange Online you have some things to consider. You can do TLS termination if you want to, and if you wanted to verify the signing of the tokens to confirm they came from the Azure Auth Service you could do that too, but there’s not much else you can do to the traffic without breaking it. You also need to be careful to protect it, as a token could be re-used, though only against the original audience URI; changing that parameter or any of the content would invalidate the digital signature. You can still restrict source IP address ranges at the network layer if you want to, but given that only the Azure Auth Service holds the private key used to sign these tokens, you are safe to assume that a properly signed token came from only one place. So, manage the security of the certificates on your Exchange servers and trust that Exchange won’t do anything with a modified or incorrectly signed token other than reject it.

What about mailbox moves?

Another type of traffic that can take place between Exchange Online and Exchange on-premises is a mailbox move – and that’s the one type of traffic that does not follow the flows described above.

The Mailbox Replication Service (MRS) is used for the migration of mailboxes between on-premises and Exchange Online. When the admin creates the migration endpoint required to enable this feature, they must provide credentials of a user with permission to invoke MRS moves. Those credentials are used in the connection to on-premises, which is TLS secured and uses NTLM authentication. So, you can use pre-auth for that connection to /ews/mrsproxy.svc, and because NTLM is used the credentials never go over the wire.
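
For reference, creating that migration endpoint from Exchange Online PowerShell looks roughly like this sketch; the server name is a placeholder for your on-premises MRS proxy (EWS) hostname.

# Prompt for the on-premises account that has permission to invoke MRS moves.
$cred = Get-Credential -Message "On-premises migration account"

# Create a remote move endpoint that points at the on-premises MRS proxy.
New-MigrationEndpoint -ExchangeRemoteMove -Name "OnPremEndpoint" -RemoteServer "mail.contoso.com" -Credentials $cred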

Hopefully that has cleared up quite a few of the questions we usually get, but just in case that’s all a bit TL;DR, here’s the short(er) version:

How do we know the traffic is from Exchange Online? Can it be spoofed?

It can only be spoofed if the certificates used to sign (and in the case of DAuth, encrypt) the traffic are compromised. So, that’s why it’s vital to secure your servers and admin accounts using well documented processes. If your servers or admins are compromised, the doors are wide open to all kinds of things.

Again, to reiterate: in DAuth the access tokens are encrypted as well as signed, so the token itself can’t be read without the correct private key. With OAuth the token can be read, but if the signature is valid, we know where the traffic came from.

Can I scope the traffic so only users from my tenant can use this communication path?

Users from your tenant aren’t using this server-to-server communication; it’s Exchange Online and Exchange on-premises using it, performing actions on behalf of the users. So, can you scope it to just those servers? We do document here the namespaces and IP address ranges these requests will come from, but given what we’ve covered in this article, we know Exchange can tell whether the traffic is authentic and won’t do anything with traffic it can’t trust. We put our money where our mouth is on this: imagine how many Exchange servers we have in Exchange Online, with no source IP scoping possible, and how many connections we handle every minute of every day. That’s why we have to write and rely on secure code to protect us, and that same code exists in Exchange on-premises, assuming you keep it up to date.

Can I pre-authenticate the traffic? Can I check the tokens validity against some endpoint?

You can’t pre-authenticate the traffic using HTTP headers as you would for Outlook or ActiveSync, because the auth isn’t done that way. Authentication is provided by proving the authenticity of the request’s signing. If we think about authentication as proving who someone is, the digital signature itself proves who is making the request: only the party in possession of the private key used to sign the traffic can sign the requests. So we validate, and thereby authenticate, the requests received from your on-premises servers coming in to Exchange Online because we know (and trust) that only you have the private key used to sign them. The Azure Auth Service looks after the private key it uses to sign our requests (very carefully, as you might expect). Can you verify the signing? To directly quote this terrific blog post: "signature verification key and issuer ID value are often available as part of some advertising mechanism supported by the authority, such as metadata & discovery documents. In practice, that means that you often don’t need to specify both values – as long as your validation software know how to get to the metadata of your authority, it will have access to the key and issuer ID values." So, you can verify the signing is good, and you could potentially also choose to additionally validate the following (a decoding sketch follows the list):

  1. That the token is a valid JWT
  2. That the iss claim (in the signed actor token) is correct – this is a well-known GUID @ tenant ID
  3. Checking the actor is Exchange (AppId claim) – this is also a well-known appID value @ tenant ID
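
If you want to eyeball those claims yourself, the payload of a captured token is just Base64Url-encoded JSON, so a few lines of PowerShell will decode it. This is a sketch only; paste in a token captured from your own trace, and remember that decoding the claims is not the same as verifying the signature.

# Paste the compact-serialized token (header.payload.signature) when prompted.
$jwt = Read-Host 'Paste the actor token'

# Take the payload segment and restore standard Base64 padding.
$payload = $jwt.Split('.')[1].Replace('-', '+').Replace('_', '/')
switch ($payload.Length % 4) { 2 { $payload += '==' } 3 { $payload += '=' } }

# Decode the JSON claim set so iss, aud, nameid, nbf and exp can be inspected.
$claims = [Text.Encoding]::UTF8.GetString([Convert]::FromBase64String($payload)) | ConvertFrom-Json
$claims | Format-List iss, aud, nameid, nbf, exp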

Can I use Multi-Factor Auth (MFA) to secure this traffic? My security policy says I must enforce MFA on anything coming in from the Internet

Let’s first agree on the definition of MFA, as that’s a term people throw around a lot, often incorrectly. MFA is a security mechanism or system that requires the caller to provide more than one form of authentication, and those forms must come from different factor categories, to verify their identity. For example, credentials and a certificate, or a certificate and a fingerprint, and so on. Another way to describe MFA is with a set of three attributes: something you know, something you have, and something you are. Something you know: a password. Something you have: a certificate. Something you are: a fingerprint.

So, now we know MFA is a general term used to describe how one party authenticates to another, and isn’t an actual ‘thing’ you can configure, let’s look at the hybrid traffic with it in mind.

In both DAuth and OAuth, the digital signing addresses the something you have aspect: the signing can only have been done by the Azure Auth Service, as it’s the only possessor of the private key used for the signing.

The something you are attribute isn’t something the flow can provide; the Azure Auth Service isn’t a person with fingers or DNA. But the something you know is arguably what Exchange Online puts in the request: the claims in an OAuth token, or the key values and attributes within a DAuth token. So one could make a case that this traffic already uses MFA. This might not be the kind of MFA your security guy can buy as an off-the-shelf solution with a token keyfob, but if you get back to what MFA actually is, rather than how it compares to a solution for client-to-server traffic, you’ll have a more meaningful conversation.

Can I SSL terminate the connection and inspect it and then re-encrypt it?

Yes, you can terminate the SSL/TLS, but 'inspecting it' is potentially a can of worms if 'inspecting' turns into 'modifying'. You can’t inspect a DAuth token without decrypting it, and what exactly are we inspecting it for? To check that the issuer, the audience and so on are correct? Ok, let’s do that, but if the signing is still intact then they must be correct. All you need to do is verify the signature matches that from the Azure Auth Service; if you can do that, you don’t need to inspect the content, as it must be valid. Whatever happens, you don’t want to tinker with the headers, or you’ll invalidate the signature, and then Exchange (or more precisely, Windows) will reject it.

Are these connections anonymous? Authenticated? Authorized?

As previously explained, the traffic does not carry authentication headers as such but instead is authenticated using digital signing of the requests, and the authorization is done by the code on the server receiving the request. Bob is asking to see Mary’s free/busy – can he? Yes or no. That’s authorization.

Are any of these connections or requests insecure or untrustworthy?

Microsoft does not consider any of the flows discussed in this article to be insecure at all. We were very diligent when designing and implementing them to make sure we secure the traffic and the tokens using all available means, and we’re only documenting this in detail in this article to clear up any doubts and to try and fully explain why it’s secure and trustworthy to configure Exchange hybrid.

How do we prevent token replay? Token modification?

Token replay is potentially possible with any token-based authentication and authorization system, as the token is used in place of credentials at the time of accessing a resource. DAuth has an advantage in this space as its tokens are encrypted, but the general principle for any authentication scheme like this is to protect any and all tokens from interception and misuse. That’s where TLS comes in, along with only allowing termination of TLS on devices you trust, and not enabling man-in-the-middle attacks by misconfiguring computers or teaching users to ignore certificate warnings.

How do I know if I’m using DAuth or OAuth and can I choose which to use?

Exchange will always try OAuth first, by looking to see whether there is an enabled IntraOrganizationConnector present that includes the domain name of the target user for the request. Only if no such connector exists, or if there is one but it is disabled, do we then look for the domain name in an Organization Relationship. And if there isn’t one of those either, we then look for the domain name in the Availability Address Space configuration.
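
A quick way to see which of these objects exist in your organization, and therefore which path a given domain will take, is simply to list them from the Exchange Management Shell. A minimal sketch:

# OAuth path: checked first - an enabled IntraOrganizationConnector covering the domain.
Get-IntraOrganizationConnector | Format-List Name, TargetAddressDomains, Enabled

# DAuth path: checked next - an organization relationship covering the domain.
Get-OrganizationRelationship | Format-List Name, DomainNames, Enabled, FreeBusyAccessEnabled

# Legacy fallback: an availability address space entry for the domain.
Get-AvailabilityAddressSpace | Format-List ForestName, AccessMethod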

Remember OAuth is only for on-premises <-> Exchange Online users, so you might very well end up with both being used if you are both hybrid with Exchange Online and have partner relationships with other organizations.

Know this though: the HCW will always try to enable OAuth in your org if it can, because we want our customers to use OAuth for the reasons previously explained. If you disable the IntraOrganizationConnector and then re-run the HCW, it will get re-enabled if your topology can support it.

Well done for making it this far. I hope you found this useful, if not today then at some point when you are having to explain to some security guy why it’s ok to go hybrid.

Please do provide any comments or ask questions if you want to, and if you want to read more here’s a list of articles I found helpful while putting this together.

References

Particular thanks for helping with this article go to Matthias Leibmann and Timothy Heeney for making sure it was technically accurate, and to numerous others who helped it make sense and kept the grammar mostly correct.

Greg Taylor
Principal PM Manager
Office 365 Customer Experience

TooManyBadItemsPermanentException error when migrating to Exchange Online?


Some of you may have noticed that more migrations might be failing due to encountering ‘too many bad items’. Upon closer review, you may notice that the migration report contains entries referencing corrupted items and being unable to translate principals. I wanted to take a few minutes and provide more information to help understand what this means, why these are now occurring, and what can be done about them. Ready to geek out?

During a mailbox migration, there are several stages we go through. We start off with copying the folder hierarchy (including any views associated with those folders), then perform an initial copy of the data (what we call the Initial Sync). Once the initial data copy process is complete, we then copy rules and security descriptors. Reviewing a move report shows entries similar to these.

Stage: CreatingFolderHierarchy. Percent complete: 10
Initializing folder hierarchy from mailbox <guid>: X folders total
Folder hierarchy initialized for mailbox <guid>: X folders created
Stage: LoadingMessages
Copying messages is complete. Copying rules and security descriptors.

For our discussion today, we are interested in the "Copying rules and security descriptors" stage. Security descriptors are Access Control Lists (ACLs), which are comprised of Access Control Entries (ACEs, the individual permission entries) and stored in SDDL format. In the context of a mailbox, this includes both the mailbox security descriptor (mailbox permissions) and folder security descriptors (permissions on individual folders). When we look at the mailbox security descriptor, it should be noted that only explicit mailbox permissions are copied. These include permissions granted by using the Add-MailboxPermission cmdlet, or by using the Exchange Management Console (2010) or Exchange Admin Center (2013 and 2016) to add Full Access rights. Any inherited permissions are not evaluated during the copy process. For example, granting the Receive-As permission on a database object in Active Directory results in an inherited Allow for Full Access for all mailboxes on that database. When mailboxes on that database are migrated to Exchange Online, those inherited permissions will not be copied.
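
To see which entries would actually be considered for a given mailbox, filtering out inherited entries gives a reasonable approximation. This is a sketch; the mailbox name is a placeholder.

# Explicit (non-inherited) mailbox permissions are the ones evaluated during the copy.
Get-MailboxPermission -Identity "Mary" | Where-Object { -not $_.IsInherited -and $_.User -notlike "NT AUTHORITY\SELF" } | Format-Table User, AccessRights -AutoSize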

Now that we have briefly covered security descriptors, let’s look at the issue. About midway through 2016, a change was introduced to Exchange Online whereby if a security principal could not be successfully validated/mapped to an Exchange Online object, it would be marked as a bad item. Previously, the behavior was that invalid permissions would simply be ignored, and administrators were then left to wonder why some permissions no longer worked after the migration. With this new behavior, corrupt/invalid permissions are now logged so that administrators will know that there are problems with permissions. From my perspective as a Support Engineer, this is a change for the better because as Administrators, you are now able to see when there are issues with permissions. It is possible that this behavior will continue to evolve over time, but I would advise to become familiar with this new behavior so that you understand what is happening.

Now how does this affect you? Since we are now incrementing the bad item count for each corrupt/invalid permission, this means that if we encounter more corrupt/invalid permissions than your current bad item limit is set to (default is 10 for a migration batch), the migration will fail. Depending on the state of permissions, you could potentially see a LOT of bad entries being logged. If you are looking at the migration report text file (downloadable from the Exchange Online Portal), you may see entries similar to the following:

11/12/2016 8:44:43 AM [EXO MRS Server] A corrupted item was encountered: Unable to translate principals for folder “Folder Name”/”FolderNTSD”: Failed to find a principal from the source forest.
5/19/2016 6:33:50 PM [EXO MRS Server] A corrupted item was encountered: Unable to translate principals to the target mailbox: Failed to find a principal in the target forest that corresponds to the following source forest principal values: Alias: <alias>; DisplayName: <Display Name>; MailboxGuid: <mailbox guid>; SID: <SID of User>; ObjectGuid:
<Object GUID>; LegDN: <legacyExchangeDN>; Proxies: [X500:<legacyExchangeDN format>; SMTP:user@contoso.com;];.
5/19/2016 6:33:50 PM [EXO MRS Server] A corrupted item was encountered: Unable to translate principals to the target mailbox: Failed to find a principal in the target forest that corresponds to the following source forest principal values: SID: <SID of User>; ObjectGuid: <Object GUID>;.

So, what is the logic used to validate permissions?

I’m glad you asked! Here is the process spelled out. There are four basic steps to this process, broken out as follows.

  1. Exchange Online – I need to resolve this SID which is present in the security descriptor (Folder or Mailbox)
  2. Exchange Online – Make a request to the On-Premises MRS Proxy, passing the SID to resolve
  3. On-Premises MRS Proxy – Look up the SID against Active Directory and return a set of attributes (including primary SID and legacyExchangeDN)
  4. Exchange Online – Take the legacyExchangeDN value provided, and attempt to match it up with a user account in the cloud which has that value stamped as an X500 proxy address.

Normally, Directory Synchronization will take care of stamping the legacyExchangeDN from each side as an X500 proxy address, but this does mean that the On-Premises legacyExchangeDN must match a Mail-enabled recipient (i.e. Mailbox, MailUser, Mail-enabled Security Group) in the cloud by an X500 Proxy. If it does not, then resolving that permission entry will fail.
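
To check whether a particular on-premises legacyExchangeDN is present as an X500 proxy address on any cloud recipient, you can run something like the following from Exchange Online PowerShell (a sketch; the DN shown is a placeholder):

# Does any Exchange Online recipient carry this legacyExchangeDN as an X500 proxy address?
$legDN = "/o=Contoso/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Recipients/cn=Mary"
Get-Recipient -Filter "EmailAddresses -eq 'X500:$legDN'" | Format-Table Name, RecipientTypeDetails -AutoSize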

I do want to differentiate between the different types of permissions errors you may see.

SourcePrincipalMappingException – these mean that when MRS Proxy tried to look up the SID against On-Premises Active Directory, it couldn’t be resolved. This is a common scenario when users leave the company and their accounts are deleted. You could also encounter these issues if the SID in question is part of the SIDHistory of an On-Premises account. When MRS Proxy attempts to look up the SID, we only search by ObjectSID or msExchMasterAccountSID. MRS Proxy does not evaluate against SIDHistory, so the SID failing to be resolved would be expected behavior. SIDHistory being populated won’t be a common scenario, but it is nonetheless something to be aware of.

Note: Exchange Online has a special built-in bad item limit of 1000 for these Source Principal Mapping errors, so these moves will not fail unless you encounter more than 1000 of these types of bad items.

TargetPrincipalMappingException – these mean that we can’t map the permission to a user account in the Target forest (Exchange Online). A common scenario here would be if a user or group was given permissions on a mailbox, but that user or group is not in your dirsync scope. After trying to move that mailbox via MRS, that user or group is not going to be present in Exchange Online, so this error would be expected. Another scenario is if a security group (not mail-enabled!) was used to assign permissions. Non mail-enabled security groups are not synchronized to Exchange Online, so they won’t exist in the Target forest.

To resolve this issue, there are really two options.

  1. Increase the bad item limit to account for permissions errors (see the sketch after this list). In complex legacy environments where multiple Exchange versions have been in place, and there has been a lot of user turnover, I’ve seen permissions errors number into the thousands. Be prepared that you may need to increase the bad item limit to a number higher than you expect. The good news here is that with improvements to Exchange over the years, the odds of encountering actual bad messages are relatively slim, so odds are good that the vast majority of bad items are bad permissions. The second bit of good news is that we log the type of each bad item encountered and make this information available in the move report. I’ll show you how to dig into a move report and look at the bad items later on in this blog post.
  2. Cancel the move, fix the bad permissions from the folder or mailbox by either removing them or fixing the issue causing the user/group to not be resolved in Exchange Online, and then submit the move again. But – you may ask – what if I want to fix the permissions on the current move and then resume it? Well, I’m not going to stop you from fixing bad permissions. But I will tell you that it won’t make any difference for the current move. We only evaluate permissions once, at the end of the initial data copy. If the move fails due to bad items (permissions), even if you fix the bad permissions we won’t re-evaluate the now fixed-up permissions and allow the move to complete successfully. You either have to up the bad item limit, or remove the move and fix the permissions and submit a new move.
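
If you go with the first option, raising the limit on an existing failed move request and resuming it looks roughly like this (a sketch; the identity and limit are placeholders, and migration batch users expose a similar BadItemLimit parameter through Set-MigrationUser):

# Raise the bad item limit on a failed move and resume it.
# Values above 50 also require the -AcceptLargeDataLoss switch.
Set-MoveRequest -Identity "mary@contoso.com" -BadItemLimit 200 -AcceptLargeDataLoss
Resume-MoveRequest -Identity "mary@contoso.com"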

Now, I promised earlier that I would go through how to review the permissions errors. You can do this by using PowerShell and saving the move report into a variable, where it is stored in memory. I typically have the move report exported out to an XML file because I don’t have direct access to customer tenant information. If you are reviewing failed moves within your own tenant, there is no need to do that if you don’t want to. I’ll provide the steps for both methods just in case you want to know them.

To save the move report to a variable, you would run the following from PowerShell connected to Exchange Online.

$movereport = Get-MoveRequestStatistics <move request identity> -IncludeReport

To save the move report to an XML file, then import the XML file into PowerShell, you would run the following from PowerShell connected to Exchange Online.

Get-MoveRequestStatistics <move request identity> -IncludeReport | Export-CliXml c:\temp\movereport.xml

Once the file is saved, then you import it into PowerShell. Note that this PowerShell instance does not have to be connected to Exchange Online. It can be just a regular PowerShell instance.

$movereport = Import-CliXml c:\temp\movereport.xml

If you have never dug into a move report, let me just say that there are all sorts of golden nuggets of information buried inside (which won’t show in the text file from the Portal, by the way!)

Now that you have the move report imported as a variable, you can access all the rich information within the report. We specified our variable earlier as $movereport, so we just need to call that variable, and access the information stored inside it.

$movereport.report.baditems – this gives you a list of all the bad items encountered. A cool tip: you can use the Out-GridView cmdlet to open the list in another window.

$movereport.report.baditems | Out-GridView

What is nice about the Grid View is that you can then filter the output. For example, to validate that all of your bad items are permissions errors, you can simply choose “Add criteria”, check the “Kind” box, and click “Add”.

image

Change “Contains” to “Does not contain”, and type Security. This will quickly show you if there are any other types of bad items.
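
If you prefer to stay in the console, grouping the bad items by their Kind property gives a similar summary without the grid view:

# Summarize bad items by type; permission problems surface as security descriptor kinds.
$movereport.report.baditems | Group-Object Kind | Sort-Object Count -Descending | Format-Table Count, Name -AutoSize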

image

Now that we have identified the behavior change, and gone over how to address it, let’s end by talking about what approach should be taken for migrating mailboxes.

The recommended approach to this new change in behavior would be to continue to migrate using low bad item counts, and then manually remediate those that fail. We recommend this approach because migrations that fail would indicate either a LOT of bad source permissions (more than 1000), or it indicates there are valid, working permissions On-Premises that are failing to be correctly mapped to objects in Exchange Online. Both of these conditions should not be common, so investigation would be warranted to ensure that you are in fact dealing with bad permissions.

Special thanks to Brad Hughes and the rest of the MRS team for their assistance and review of this content.

Ben Winzenz

Announcing Original Folder Item Recovery


Cumulative Update 6 (CU6) for Exchange Server 2016 will be released soon™, but before that happens, I wanted to make you aware of a behavior change in item recovery that is shipping in CU6. Hopefully this information will aid you in your planning, testing, and deployment of CU6.

Item Recovery

Prior to Exchange 2010, we had the Dumpster 1.0, which was essentially a view stored per folder. Items in the dumpster stayed in the folder where they were soft-deleted (shift-delete or delete from Deleted Items) and were stamped with the ptagDeletedOnFlag flag. These items were special-cased in the store to be excluded from normal Outlook views and quotas. This design also meant that when a user wanted to recover the item, it was restored to its original folder.

With Exchange 2010, we moved away from Dumpster 1.0 and replaced it with the Recoverable Items folder. I discussed the details of that architectural shift in the article, Single Item Recovery in Exchange 2010. The Recoverable Items architecture created several benefits: deleted items moved with the mailbox, deleted items were indexable and discoverable, and the design facilitated both short-term and long-term data preservation scenarios.

As a reminder, the following actions can be performed by a user:

  • A user can perform a soft-delete operation where the item is deleted from an Inbox folder and moved to the Deleted Items folder. The Deleted Items folder can be emptied either manually by the user, or automatically via a Retention Policy. When data is removed from the Deleted Items folder, it is placed in the Recoverable Items\Deletions folder.
  • A user can perform a hard-delete operation where the item is deleted from an Inbox folder and moved to the Recoverable Items\Deletions folder, bypassing the Deleted Items folder entirely.
  • A user can recover items stored in the Recoverable Items\Deletions folder via recovery options in Outlook for Windows and Outlook on the web.

However, this architecture has a drawback: items cannot be recovered to their original folders.

Many of you have voiced your concerns around this limitation in the Recoverable Items architecture, through various feedback mechanisms, like at Ignite 2015 in Chicago where we had a panel that included the Mailbox Intelligence team (those who own backup, HA, DR, search, etc.). Due to your overwhelming feedback, I am pleased to announce that beginning with Exchange 2016 CU6, items can be recovered to their original folders!

How does it work?

  1. When an item is deleted (soft-delete or hard-delete) it is stamped with the LastActiveParentEntryID (LAPEID). Because this is a folder ID, it does not matter if the folder is later moved within the mailbox’s hierarchy or renamed.
  2. When the user attempts a recovery action, the LAPEID is used as the move destination endpoint.

The LAPEID stamping mechanism has been in place since Exchange 2016 Cumulative Update 1. This means that as soon as you install CU6, your users can recover items to their original folders!

Soft-Deletion:

ItemRecovery

 

Hard-Deletion

ItemHardRecovery

Are there limitations?

Yes, there are limitations.

First, to use this functionality, the user’s mailbox must be on a Mailbox server that has CU6 installed. The user must also use Outlook on the web to recover to the original folder; neither Outlook for Windows nor Outlook for Mac supports this functionality today.

If an item does not have an LAPEID stamped, then the item will be recovered to its folder type origin – Inbox for mail items, Calendar for calendar items, Contacts for contact items, and Tasks for task items. How could an item not have an LAPEID? Well, if the item was deleted before CU1 was installed, it won’t have an LAPEID.

And lastly, this feature does not recover deleted folders. It only recovers items to folders that still exist within the user’s mailbox hierarchy. Once a folder is deleted, recovery will be to the folder type origin for that item.

Summary

We hope you can take advantage of this long sought-after feature. We continue to look at ways we can improve user recovery actions and minimize the need for third-party backup solutions. If you have questions, please let us know.

Ross Smith IV
Principal Program Manager
Office 365 Customer Experience

.NET Framework 4.7 and Exchange Server


We wanted to post a quick note to call out that our friends in .NET are releasing .NET Framework 4.7 to Windows Update for client and server operating systems it supports.

At this time, .NET Framework 4.7 is not supported by Exchange Server. Please resist installing it on any of your systems after its release to Windows Update.

We will be sure to release additional information and update the Exchange supportability matrix when .NET Framework 4.7 becomes a supported version of .NET Framework with Exchange Server. We are working with the .NET team to ensure that Exchange customers have a smooth transition to .NET Framework 4.7, but in the meantime, delay this particular .NET update on your Exchange servers. Information on how this block can be accomplished can be found in article 4024204, How to temporarily block the installation of the .NET Framework 4.7.

It’s too late, I installed it. What do I do now?

If .NET Framework 4.7 was already installed and you need to roll back to .NET Framework 4.6.2, here are the steps:

Note: These instructions assume you are running the latest Exchange 2016 or Exchange 2013 Cumulative Update (at the time this article was drafted), as well as .NET Framework 4.6.2, prior to the upgrade to .NET Framework 4.7. If you were running a version of .NET Framework other than 4.6.2, or an older version of Exchange, prior to the upgrade to .NET Framework 4.7, then please refer to the Exchange Supportability Matrix to validate which version of .NET Framework you need to roll back to, and update the steps below accordingly. This may mean using different offline/web installers, or looking for different package names in Windows Update, depending on the version of .NET Framework you are rolling back to if it is something other than .NET Framework 4.6.2.

1. If the server has already updated to .NET Framework 4.7 and has not rebooted yet, then reboot now to allow the installation to complete.

2. Stop all running services related to Exchange.  You can run the following cmdlet from Exchange Management Shell to accomplish this: 

(Test-ServiceHealth).ServicesRunning | %{Stop-Service $_ -Force}

3. Depending on your operating system you may be looking for slightly different package names to uninstall .NET Framework 4.7.  Uninstall the appropriate update.  Reboot when prompted.

  • On Windows 7 SP1 / Windows Server 2008 R2 SP1, you will see the Microsoft .NET Framework 4.7 as an installed product under Programs and Features in Control Panel.
  • On Windows Server 2012 you can find this as Update for Microsoft Windows (KB3186505) under Installed Updates in Control Panel.
  • On Windows 8.1 / Windows Server 2012 R2 you can find this as Update for Microsoft Windows (KB3186539) under Installed Updates in Control Panel.
  • On Windows 10 Anniversary Update and Windows Server 2016 you can find this as Update for Microsoft Windows (KB3186568) under Installed Updates in Control Panel.

4. After rebooting check the version of the .NET Framework and verify that it is again showing version 4.6.2.  You may use this method to determine what version of .NET Framework is installed on a machine. If it shows a version prior to 4.6.2 go to Windows Update, check for updates, and install .NET Framework 4.6.2.  If .NET Framework 4.6.2 is no longer being offered via Windows Update, then you may need to use the Offline Installer or the Web Installer. Reboot when prompted.  If the machine does show .NET Framework 4.6.2 proceed to step 5.
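
One way to check the installed version is to read the .NET Framework Release value from the registry. This is a sketch based on Microsoft's published release-to-version mapping, where 394802 (Windows 10 Anniversary Update/Windows Server 2016) and 394806 (all other operating systems) correspond to .NET Framework 4.6.2.

# Read the .NET Framework 4.x release number from the registry.
$release = (Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full').Release

# 394802 and 394806 indicate .NET Framework 4.6.2.
if ($release -in 394802, 394806) { ".NET Framework 4.6.2 is installed (Release $release)" } else { "Release value is $release - compare it against the version mapping table" }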

5. After confirming .NET Framework 4.6.2 is again installed, stop Exchange services using the command from step 2.  Then, run a repair of .NET 4.6.2 by downloading the offline installer, running setup, and choosing the repair option.  Reboot when setup is complete.

6. Apply any security updates specifically for .NET 4.6.2 by going to Windows update, checking for updates, and installing any security updates found.  Reboot after installation.

7. After reboot verify that the .NET Framework version is 4.6.2 and that all security updates are installed.

8. Follow the steps here to block future automatic installations of .NET Framework 4.7.

The Exchange Team


Released: June 2017 Quarterly Exchange Updates


The latest set of Cumulative Updates for Exchange Server 2016 and Exchange Server 2013 are now available on the download center. These releases include fixes to customer reported issues, all previously reported security/quality issues and updated functionality.

Updated functionality in Cumulative Update 6

With Cumulative Update 6 we are adding two highly anticipated features; Sent Items Behavior Control and Original Folder Item Recovery. These features are targeted to Exchange Server 2016 only and will not be included in Exchange Server 2013. Exchange Server 2013 already has its own implementation of Sent Items Behavior Control which is different than the version we are releasing today. The Cumulative Update 6 behavior is more closely aligned with how this worked in Exchange Server 2010. Due to architectural differences, the configuration of this feature is not retained if mailboxes are moved between Exchange Server 2010 and Exchange Server 2016 or between Exchange Server 2013 and Exchange Server 2016.

Latest time zone updates

All of the packages released today include support for time zone updates published by Microsoft through May 2017.

TLS 1.2 Exchange Support Update

We previously announced that Cumulative Update 6 would include support for TLS 1.2. The updates released today do have improved support for TLS 1.2 but we are not encouraging customers to move to a TLS 1.2 only environment at this time. We are working with the Windows and .Net teams to make configuring TLS 1.2 a more streamlined experience. Customers should continue to watch this space and be prepared to deprecate TLS 1.0 and 1.1 in the near future.

.Net Framework 4.7 compatibility with these releases

The Exchange team is still completing validation of the June releases with .Net Framework 4.7. We have not found any compatibility issues at this time, but are asking customers to delay using .Net Framework 4.7 until we have completed our validation. Once this validation is complete we will provide further guidance on .Net Framework 4.7 and Exchange Server.

Release Details

KB articles that describe the fixes in each release are available as follows:

Exchange Server 2016 Cumulative Update 6 does include new updates to Active Directory Schema. If upgrading from an older Exchange version or installing a new server, Active Directory updates may still be required. These updates will apply automatically during setup if the logged on user has the required permissions. If the Exchange Administrator lacks permissions to update Active Directory Schema, a Schema Admin must execute SETUP /PrepareSchema prior to the first Exchange Server installation or upgrade. The Exchange Administrator should execute SETUP /PrepareAD to ensure RBAC roles are current.
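
For reference, running the preparation steps from an elevated prompt in the folder containing the extracted Cumulative Update setup files looks like this (Schema Admin and Enterprise Admin rights are required):

# Extend the Active Directory schema, then prepare AD (including RBAC role updates).
.\Setup.exe /PrepareSchema /IAcceptExchangeServerLicenseTerms
.\Setup.exe /PrepareAD /IAcceptExchangeServerLicenseTerms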

Exchange Server 2013 Cumulative Update 17 does not include updates to Active Directory, but may add additional RBAC definitions to your existing configuration. PrepareAD should be executed prior to upgrading any servers to Cumulative Update 17. PrepareAD will run automatically during the first server upgrade if Exchange Setup detects this is required and the logged on user has sufficient permission.

Additional Information

Microsoft recommends all customers test the deployment of any update in their lab environment to determine the proper installation process for your production environment. For information on extending the schema and configuring Active Directory, please review the appropriate TechNet documentation.

Also, to prevent installation issues you should ensure that the Windows PowerShell Script Execution Policy is set to “Unrestricted” on the server being upgraded or installed. To verify the policy settings, run the Get-ExecutionPolicy cmdlet from PowerShell on the machine being upgraded. If the policies are NOT set to Unrestricted you should use the resolution steps in KB981474 to adjust the settings.
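
Checking and, if needed, adjusting the policy takes two cmdlets; a minimal sketch (run from an elevated PowerShell prompt):

# Show the effective execution policy at each scope.
Get-ExecutionPolicy -List

# Set the machine-wide policy to Unrestricted before running Exchange setup.
Set-ExecutionPolicy Unrestricted -Scope LocalMachine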

Reminder: Customers in hybrid deployments where Exchange is deployed on-premises and in the cloud, or who are using Exchange Online Archiving (EOA) with their on-premises Exchange deployment are required to deploy the most current (e.g., 2013 CU17, 2016 CU6) or the prior (e.g., 2013 CU16, 2016 CU5) Cumulative Update release.

For the latest information on Exchange Server and product announcements please see What’s New in Exchange Server 2016 and Exchange Server 2016 Release Notes. You can also find updated information on Exchange Server 2013 in What’s New in Exchange Server 2013, Release Notes and product documentation available on TechNet.

Note: Documentation may not be fully available at the time this post is published.

Post release update concerning Cumulative Update 5

Several customers have reported problems with 3rd party solutions which provide brick level backup or single mailbox recovery as a reported feature after installing Cumulative Update 5. Cumulative Update 5 included an update to our database schema which caused some of these products to not function as they had previously. That change carries forward into Cumulative Update 6 as well. The practice of updating the database schema has long been in place with Exchange Server. Microsoft has urged developers to not consider the schema to be immutable nor to program against it. The schema is not publicly defined and is a structure internal to the operation of Exchange Server. Access to store level objects is provided through publicly documented interfaces and structures only.

The Exchange Team

Discontinuation of support for Session Border Controllers in Exchange Online Unified Messaging


In July 2018, we will no longer support the use of Session Border Controllers (SBC) to connect 3rd Party PBX systems to Exchange Online Unified Messaging (UM). We’re making this change to provide a higher quality of service for voicemail, using standard Exchange and Skype for Business protocols. Customers considering a new deployment of this scenario should be aware that they will have a little less than a year to complete one of the migrations below. Customers with existing deployments remain fully supported until July 2018, including moving voicemail-enabled mailboxes from Exchange on-premises and voicemail-enabling new mailboxes.

The following configurations are not affected by this change:

  • Skype for Business Server (on-premises) connected to Exchange Online UM
  • 3rd party voicemail solutions that deposit voicemail messages into Exchange Online mailboxes through APIs, rather than an SBC connection
  • All forms of Exchange Server UM (on-premises)

There are several alternative solutions for impacted customers, one or more of which must be implemented prior to July 2018.

  • Option 1: Complete migration from 3rd party on-premises PBX to Office 365 Cloud PBX.
  • Option 2: Complete migration from 3rd party on-premises PBX to Skype for Business Server Enterprise Voice on-premises.
  • Option 3: For customers with a mixed deployment of 3rd party PBX and Skype for Business, connect the PBX to Skype for Business Server using a connector from a Microsoft partner, and continue using Exchange Online UM through that connector. For example, TE-SYSTEMS’ anynode UM connector can be used for that purpose.
  • Option 4: For customers with no Skype for Business Server deployment or for whom the solutions above are not appropriate, implement a 3rd party voicemail system.

Although only a small number of customers are affected by this change, we know that planning for changes to voice platforms requires time to evaluate options, and to implement the selected option. We encourage you to start this process soon. For more information, please visit the following pages:

You can also ask questions regarding these changes on the Office 365 Tech Community.

Exchange Team

Modern public folders logging and when to use it


Hello again! In our last article, we discussed recommendations for deployment of public folders and public folder mailboxes. In this post, we will be discussing methods and tips for monitoring connections being made to the Public Folder mailboxes with the help of different log types available in Exchange Server 2013 and Exchange Server 2016. This article mainly focuses on logging related to public folder mailbox activity and provides information on how to analyze these logs to get the information on the usage of public folders. Let’s get to it!

How do I log and report on different public folder connections?

As we discussed in the previous post, the ability to estimate the number of connections being made to public folder mailboxes is very helpful, as deployment guidance for public folders partially revolves around connection counts. As of today, the available logging methods will not reveal the names of the individual public folders clients are connecting to, but they do contain information about the public folder mailboxes being accessed by clients.

Depending on what information you are looking to gather there are several flavors of logging you can consider.

  • Autodiscover logs – use these to learn which public folder mailboxes Outlook clients get sent to during the Autodiscover process.
  • Outlook Web App logs – use these to learn which default public folder mailboxes Outlook Web App clients get sent to during the connection process. As stated in our first article, the default public folder mailbox could be either one provided randomly to the requesting OWA client or a hard-coded default public folder mailbox assigned to a specific user’s mailbox.
  • RPC Client Access logs & MAPI Client Access logs on Microsoft Exchange 2013 Mailbox servers – use these to find out which public folder mailboxes on a specific mailbox server users are connecting to using the RPC/HTTP and MAPI/HTTP protocols. These logs can be used with Microsoft Exchange 2013.
  • MAPI/HTTP logs in Microsoft Exchange 2016 Servers – learn which public folder mailboxes your MAPI/HTTP clients are connecting to. These logs should only be used with Microsoft Exchange 2016.

Let’s get started! In the upcoming sections, we are going to make extensive use of the Log Parser Studio (LPS) tool, which will be used to parse the logs to get the required data. It is a great tool, and if you are not already aware of it, I would recommend you first visit the following links and get familiar with it:

Autodiscover logs: Which public folder mailboxes are Outlook clients connecting to?

Why do Autodiscover logs need to be investigated?

The Autodiscover service is responsible for informing Outlook clients where and how to connect to a public folder mailbox. This may be so Outlook can display the public folder hierarchy tree, or to make a public logon connection to access content within a public folder mailbox.

Thus, the Autodiscover logs can be useful to administrators in determining which public folder mailboxes are being returned by the Autodiscover service. This information can be very helpful in large multi-site environments when trying to identify possible improvements in public folder mailbox or public folder locations.

To understand this better, let’s consider a common scenario an administrator might face. An administrator may need to determine which public folder mailboxes are being returned to end users when they connect from different sites using Outlook. This can be a challenging task if there are many sites and users, resulting in a huge data set. Rather than trying to analyze the data manually, an automated way of getting the desired outcome is needed.

This is where the Log Parser Studio (LPS) queries can be used to parse the Autodiscover logs on mailbox servers to get us the required data for further investigation and actions.

Where are Autodiscover logs located?

Autodiscover logs should be investigated on Mailbox servers and can be found in the following default path for Microsoft Exchange 2013/2016:

  • C:\Program Files\Microsoft\Exchange Server\V15\Logging\Autodiscover

(The location may change if the installation path is different from the default.)

Autodiscover Method 1, server-side.

At this point it is assumed Log Parser Studio has been installed.

1. Open Log Parser Studio by double-clicking the LPS.exe application file, as shown in the image below.

image

2. Once LPS launches, select File in the top-left corner and then click New Query, which will open a new query tab.

3. Copy the sample query from the example below into the query section and set the Log Type to EELXLOG.

/* New Query */
SELECT Count(*) As Hits,
EXTRACT_PREFIX(EXTRACT_SUFFIX(GenericInfo, 0, 'Caller='), 0, ';') as User-Name,
EXTRACT_PREFIX(EXTRACT_SUFFIX(GenericInfo, 0, 'ResolveMethod='), 0, ';') as Method,
EXTRACT_PREFIX(EXTRACT_SUFFIX(GenericInfo, 0, 'ExchangePrincipal='), 0, ';') as PF-MBX,
EXTRACT_PREFIX(EXTRACT_SUFFIX(GenericInfo, 0, 'epSite='), 0, ';') as Site-Name
FROM '[LOGFILEPATH]'
WHERE Method LIKE '%FoundBySMTP%'
GROUP BY User-name, Method, PF-MBX, Site-Name
/* End Query */

4. Lock the query to avoid any modifications by clicking the Lock icon once, as shown below.

image

5. Click the Log file manager button in the top panel of LPS to add the required logs, as shown in the image below.

image

6. Specify the location of the required log files, select one file in the folder where the logs reside, click Open, and then click OK.

7. In this example, I have selected logs from a specific mailbox server by specifying the UNC path of the server and log location. It is possible to add multiple folders of the same log type from different servers and parse all of them at the same time.

image

8. The only thing left is to execute the query; to do so, just click the Execute query button in the LPS panel. The output will be in a format similar to the one shown below.

image

Note: This LPS query will provide a report that includes information on which users are connecting to which public folder mailboxes, along with the Active Directory site the mailbox resides in.

Why might this type of report be useful?

The output of this data may help an administrator determine whether a significant number of users in a geographic location would benefit from a public folder mailbox located closer to them. Depending on the results, the administrator can decide to deploy additional Hierarchy Only Secondary Public Folder Mailboxes (HOSPFM) in those geographic sites and then set the DefaultPublicFolderMailbox property on the user mailboxes so that they contact a public folder mailbox (HOSPFM) in their own site to fetch the public folder hierarchy information; in turn, the user experience while accessing public folders will be better!
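
Setting that property is a one-liner per mailbox; a sketch with placeholder mailbox and public folder mailbox names:

# Pin a user's hierarchy connections to a hierarchy-only public folder mailbox in their site.
Set-Mailbox -Identity "mary@contoso.com" -DefaultPublicFolderMailbox "HOSPFM-EU-001"

# Verify which public folder mailbox the user is now anchored to.
Get-Mailbox -Identity "mary@contoso.com" | Format-List DefaultPublicFolderMailbox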

One more point to note is that only Microsoft Exchange 2016 Autodiscover logs will show the site name. This logging functionality is not present in Microsoft Exchange 2013, which will require additional manual work to figure out the site location of the mailbox.

Note: The example query will return additional Autodiscover log entries for non-public folder mailbox queries. If you have a standardized naming convention for your public folder mailboxes, you could enhance the query to only return results where the ExchangePrincipal value contains a portion of your naming convention.

Autodiscover Method 2, client-side.

You can also use the Test E-mail AutoConfiguration tool from within the Outlook client to perform a single-user test. This will show you which public folder mailbox the Autodiscover service returns to a single end user for hierarchy connections.

To start the Test E-mail AutoConfiguration tool, follow these steps:

  1. Start Outlook.
  2. Hold down the Ctrl key, right-click the Outlook icon in the notification area, and then click Test Email AutoConfiguration.
  3. Verify that the correct email address is in the E-mail Address box. You do not need to provide a password if you are running a test for the currently logged in user. If you are testing a different user account than the one currently logged into the machine, then you will need to provide both the email address and password for that account.
  4. In the Test Email AutoConfiguration window, click to clear the Use Guessmart check box and the Secure Guessmart Authentication check box.
  5. Click to select the Use Autodiscover check box, and then click Test.

Below is the excerpt from the XML File gathered from Test E-mail AutoConfiguration:

image

As you can see above, the user administrator@contoso.com will be using the public folder mailbox HOSPFM-001@contoso.com to make a hierarchy connection.

Please note this is only an example; if you follow our guidance you will not have any users making connections to your primary public folder mailbox for hierarchy or content.

Outlook Web App logging: which default public folder mailboxes do Outlook Web App clients get sent to?

When users log into Outlook on the Web (OWA) in an environment with public folders, the public folder mailbox used for hierarchy information could be a static default public folder mailbox (if one has been set manually on the mailbox), or a random public folder mailbox. It should be noted Autodiscover is not utilized when accessing public folders using OWA. Instead, OWA uses its own function to return a default public folder mailbox to the requesting user. As such, you will not find OWA users in the previously mentioned Autodiscover logs.

Location of OWA logs

All logging data for Outlook on the Web (OWA), including public folder access, will be in the following folder on Exchange 2013 Client Access servers or Exchange 2016 Mailbox servers:

  • C:\Program Files\Microsoft\Exchange Server\V15\Logging\HttpProxy\Owa

Here is an example of a Log Parser Studio query to fetch data from OWA logs:

/* New Query */
SELECT COUNT(*) as hits,
AnchorMailbox AS PF-MBX,AuthenticatedUser,ProtocolAction,TargetServer,HttpStatus,BackEndStatus,Method,ProxyAction
FROM '[LOGFILEPATH]'
WHERE PF-MBX LIKE '%smtp%'
GROUP BY PF-MBX,AuthenticatedUser,ProtocolAction,TargetServer,HttpStatus,BackEndStatus,Method,ProxyAction
ORDER BY hits ASC

Log type is set to EELXLOG

Fields used in the query:

  • AnchorMailbox – The default public folder mailbox being returned to the user
  • AuthenticatedUser – Users accessing the PF mailbox
  • ProtocolAction – Action being taken by the user while accessing public folders, such as GetFolder, GetItem, CreateItem, FindItem
  • TargetServer – Provides information on which Exchange server the query is being redirected to fetch the public folder mailbox
  • HttpStatus & BackEndStatus – Provides information on connection status for the public folder mailbox connection

Output is as follows:

In the output below, the AnchorMailbox value is the public folder mailbox the end user is accessing for their hierarchy connection.

image

In the above sample result, the user “Administrator” is logged into OWA and is accessing public folder mailbox HOSPFM-001, which was returned as the default public folder mailbox. We know Administrator is using this public folder mailbox for a hierarchy connection because OWA logging currently does not capture information for public folder content access.

In Log Parser Studio, you can save this query and run it in batches to collect logging over time. You can also add an entire folder instead of individual log files, which makes it easier to parse both existing and newly written logs. The number of hits logged against each public folder mailbox reveals which public folder mailboxes are used most often for fetching hierarchy information.
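
If you only want to parse recent activity, one approach is to copy the newest log files into a separate working folder and point LPS at that folder. A minimal PowerShell sketch, assuming the default log path shown above and a hypothetical working folder of C:\Temp\OwaLogs:

# Copy the last 7 days of OWA HttpProxy logs into a working folder for LPS to parse.
# Both paths are assumptions; adjust them for your environment.
$source = 'C:\Program Files\Microsoft\Exchange Server\V15\Logging\HttpProxy\Owa'
$target = 'C:\Temp\OwaLogs'

New-Item -ItemType Directory -Path $target -Force | Out-Null

Get-ChildItem -Path $source -Filter '*.log' |
    Where-Object { $_.LastWriteTime -gt (Get-Date).AddDays(-7) } |
    Copy-Item -Destination $target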

How can this logging be useful?

Since OWA does not use Autodiscover to fetch a default public folder mailbox, it may make sense to identify the public folder mailboxes being returned to users when they use OWA. As in our earlier example for Outlook, this may identify cases where OWA is using public folder mailboxes that are a less optimal performance choice. Keep in mind that for OWA, a better-performing hierarchy mailbox is one closer to the Exchange Mailbox server where OWA is rendered, rather than one closer to where the user’s Outlook client sits. Depending on your Exchange deployment and where OWA is served, this may mean making decisions about your public folder mailbox placement based on which client is used more often in your environment, so that client gets the more optimal experience.

As mentioned in my earlier post, the recommendation for users in geographically dispersed sites is to deploy additional Hierarchy Only Secondary Public Folder Mailboxes (HOSPFM) and set the DefaultPublicFolderMailbox property on the user mailboxes in those sites, to ensure those users consume hierarchy from a public folder mailbox within their own site.
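
If you want to verify or enforce that assignment, here is a short Exchange Management Shell sketch; user01@contoso.com and HOSPFM-EU-001 are hypothetical examples, not values from this environment:

# Check which public folder mailbox a user is assigned and is effectively using for hierarchy.
Get-Mailbox -Identity user01@contoso.com |
    Format-List DefaultPublicFolderMailbox, EffectivePublicFolderMailbox

# Pin the user's hierarchy connections to a hierarchy-only public folder mailbox in their site.
# 'HOSPFM-EU-001' is a hypothetical mailbox name.
Set-Mailbox -Identity user01@contoso.com -DefaultPublicFolderMailbox 'HOSPFM-EU-001'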

RPC Client Access logs & MAPI Client Access logs on Mailbox Servers (Microsoft Exchange 2013)

While Autodiscover logs can provide information about the public folder mailboxes Outlook learns about and may potentially connect to, the RPC Client Access (RPC/HTTP) and MAPI Client Access (MAPI/HTTP) logs provide information about the actual public folder mailbox connections established by users.

Both log types can be combined in a single LPS query and parsed to get useful information about the public folder mailboxes being accessed.

Default location of logs:

  • MAPI Client Access: C:\Program Files\Microsoft\Exchange Server\V15\Logging\MAPI Client Access
  • RPC Client Access: C:\Program Files\Microsoft\Exchange Server\V15\Logging\RPC Client Access

Which public folder mailboxes on a specific server are users connecting to?

Consider a multi-site environment where the administrator is asked to determine which users are connecting to public folder mailboxes on a specific server. Let’s say E15-CLASS-MB1 is the Mailbox server hosting the public folder mailboxes, and the administrator needs to find out who is making connections to them. Depending on the results, decisions can be made about whether it makes sense to move certain public folder mailboxes closer to a particular user location, based on who actually uses each public folder mailbox. Below are the steps to follow:

1. Open LPS on the machine. Copy and paste the query below into the New Query window in LPS, as per the instructions mentioned earlier in the post.

/* Public Folder Mailboxes Hits */
SELECT Count(*) as Hits,
operation as Operation,
user-email as [SMTP Address],
EXTRACT_PREFIX(EXTRACT_SUFFIX(operation-specific, 0, 'Logon:'), 0, ';') as MailBox-LegacyExchangeDN,
EXTRACT_PREFIX(EXTRACT_SUFFIX(operation-specific, 0, 'on '), 0, ';') as Server
INTO '[OUTFILEPATH]\GeoReport.CSV'
FROM '[LOGFILEPATH]'
WHERE operation-specific LIKE '%Logon: Public%' AND Server LIKE '%E15-CLASS-MB1%'
GROUP BY Operation, Mailbox-LegacyExchangeDN, Server, [SMTP Address]
ORDER BY hits DESC

Fields used in the query:

  • Operation: Used to extract the logons for public folder mailboxes
  • SMTP Address: Email address of the user accessing the public folder mailbox
  • Mailbox-LegacyExchangeDN: The public folder mailbox, identified by its LegacyExchangeDN
  • Server: The server the connection requests are coming to

2. Set the Log File type to EELLOG. Add the required log folders from the respective Mailbox servers and start the query by clicking the Query button in the LPS panel.

3. The above sample query exports the results in CSV format. If no export location is specified in the query, the default export directory will be used.

4. Once the query has finished executing, it will export the output to a CSV file, which can then be formatted as a table.

5. To do so, open the CSV file. By default, the CSV file has no formatting and will show the output in a format similar to the following.

image

6. Select all the cells that contain data, then on the Insert tab click Table. In the Create Table pop-up window that opens, click OK.

image

7. A new table is created in a structured format that makes it easy to sort and filter the data.

image

8. Filtering can be used to sort the data by the available fields, such as SMTP Address and Mailbox-LegacyExchangeDN.

If the LegacyExchangeDN output is trimmed and you cannot figure out the full public folder mailbox name, you can copy the LegacyExchangeDN value into the Exchange Management Shell and use it to find the name of the relevant mailbox, as shown below:

image
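
If you prefer to script that lookup, a minimal Exchange Management Shell sketch is below; the LegacyExchangeDN value is a placeholder that you would replace with the value copied from your report:

# Placeholder LegacyExchangeDN copied from the report; replace with your own value.
$legacyDn = '/o=Contoso/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Recipients/cn=HOSPFM-001'

# Match on the start of the DN in case the value in the report was trimmed.
Get-Mailbox -PublicFolder -ResultSize Unlimited |
    Where-Object { $_.LegacyExchangeDN -like "$legacyDn*" } |
    Format-List Name, PrimarySmtpAddress, Database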

You now have information about which public folder mailboxes on the server are actively used by users, and how frequently. The administrator can use this to make public folder deployment decisions.

MAPI/HTTP Logs (Exchange 2016 Only)

In Exchange Server 2016 there is an additional folder created specifically to log MAPI/HTTP protocol traffic. Recent updates to Exchange 2016 have removed MAPI/HTTP traffic from the MAPI Client Access log; all MAPI/HTTP traffic is now logged in the MapiHttp folder. If not all of your Outlook for Windows clients connect to Exchange 2016 via MAPI/HTTP, you may need to analyze both logs to get a full picture of your public folder mailbox connections until all Outlook for Windows clients are using MAPI/HTTP.
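
To determine whether your clients should be using MAPI/HTTP at all (and therefore which log sets you need to parse), you can check the organization-wide and per-mailbox settings. A short sketch, assuming the Exchange Management Shell; user01@contoso.com is a hypothetical mailbox:

# Is MAPI/HTTP enabled at the organization level?
Get-OrganizationConfig | Format-List MapiHttpEnabled

# Has MAPI/HTTP been overridden for a specific mailbox? A blank value means the organization setting applies.
Get-CASMailbox -Identity user01@contoso.com | Format-List MapiHttpEnabled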

The logs reside in the following default path:

  • C:\Program Files\Microsoft\Exchange Server\V15\Logging\MapiHttp\Mailbox

Exchange Server 2016 uses slightly different field names for MAPI/HTTP logging, so a query previously used with Exchange Server 2013 for parsing MAPI/HTTP traffic in the older MAPI Client Access logs will no longer work with Exchange Server 2016.

Which public folder mailboxes are your MAPI/HTTP clients connecting to?

In Exchange Server 2016, the MAPI/HTTP logs can be investigated for connections established to public folder mailboxes over the MAPI/HTTP protocol using the query below in Log Parser Studio.

Ensure the Log Type is set to EELXLOG

/* New Query */
SELECT Count(*) as Hits,MailboxId AS PF-Mailbox, MDBGuid AS Database, ActAsUserEmail AS SMTP-Address, SourceCafeServer FROM '[LOGFILEPATH]'
WHERE OperationSpecific LIKE '%PublicLogon%'
GROUP BY PF-Mailbox,Database,SMTP-Address, SourceCafeServer
ORDER BY Hits DESC

Fields used in the query:

  • OperationSpecific: Used to extract the logons for public folder mailboxes
  • SMTP-Address: Email address of the user accessing the public folder mailbox
  • PF-Mailbox: The mailbox GUID of the public folder mailbox
  • SourceCafeServer: The Client Access (front end) server the connection request came through
  • Database: The mailbox database hosting the public folder mailbox being connected to

Once the query is executed, it gathers the information and populates the results in the format below. The results can be exported to CSV, and more data can be gathered by running the query in batches.

Sample output:

image

In the Exchange 2016 MAPI/HTTP logs, the name of the public folder mailbox is not revealed, but the log does capture the mailbox GUID of the public folder mailbox, which can be used in a PowerShell command to fetch the actual public folder mailbox name.
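
A hedged sketch of that lookup is below; the GUID is a placeholder taken from the PF-Mailbox column of the report, and it is assumed to correspond to the ExchangeGuid of the public folder mailbox:

# Placeholder GUID copied from the MailboxId (PF-Mailbox) column; replace with your own value.
$pfGuid = 'c6d8524e-0000-4bfa-9f7a-0123456789ab'

# Resolve the GUID to the public folder mailbox name and its hosting database.
Get-Mailbox -PublicFolder -ResultSize Unlimited |
    Where-Object { $_.ExchangeGuid -eq $pfGuid } |
    Format-List Name, PrimarySmtpAddress, Database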

Note: If there are any users hosted on Exchange 2016 who still use the RPC/HTTP protocol, the RPC/HTTP query previously shown can be used to fetch the data for those specific users.

How can this data be useful to administrators?

Administrators can run this report repeatedly in batches and gather the data in CSV files. The results from different batches can be collated and investigated to find the public folder mailboxes being accessed most frequently. From there, administrators should be able to determine whether any public folder mailboxes are being used heavily and then decide whether to move specific public folder mailboxes, or even specific public folders, closer to the users in a given location.
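
One way to collate the output is to import all of the exported CSV files and re-aggregate the hit counts per public folder mailbox. A minimal PowerShell sketch, assuming the batch reports were exported to a hypothetical C:\Temp\PFReports folder and use the Hits and PF-Mailbox column names from the earlier query:

# Combine the CSV reports from multiple LPS batches and total the hits per public folder mailbox.
# The folder path and column names are assumptions; adjust them to match your exports.
Get-ChildItem -Path 'C:\Temp\PFReports' -Filter '*.csv' |
    ForEach-Object { Import-Csv -Path $_.FullName } |
    Group-Object -Property 'PF-Mailbox' |
    ForEach-Object {
        [PSCustomObject]@{
            PFMailbox = $_.Name
            TotalHits = ($_.Group | Measure-Object -Property Hits -Sum).Sum
        }
    } |
    Sort-Object -Property TotalHits -Descending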

There are so many log types. When should I use what?

It is true there are many different logs in Exchange Server showing similar information. Depending on which protocols your users use, you can decide which log types to parse. Autodiscover logs give a combined view of which public folder mailboxes users are at least attempting to access. If you have content-only public folder mailboxes in your environment that are excluded from serving hierarchy and are not directly assigned to users as their default, you may be able to determine whether some are never accessed and contain content worth purging. If you need a more granular view of the world, and the ability to generate some sort of heat map, you may choose the more protocol-specific logs. These logs record each time a client creates a new connection to a public folder mailbox, letting you determine not just whether the client learned about a mailbox through Autodiscover, but whether it is being used heavily by many users over time. The options are varied and it is up to you to choose based on your needs.

Summary

In this post, I have discussed the different types of public folder logging and how this logging can help administrators identify heavily used public folder mailboxes, which in turn can be used for planning and deploying public folders in the environment. In upcoming posts, we will discuss topics related to public folder management and quota-related information.

I would like to thank Brian Day, Ross Smith IV & Nasir Ali for their input while reviewing this content and validating the guidance in this blog post. Special thanks to Kary Wall for his input on the Log Parser Studio queries and to Nino Bilic for helping to get this blog post ready!

Siddhesh Dalvi
Support Escalation Engineer

Announcing availability of 250,000 public folder Exchange 2010 hierarchy migrations to Exchange Online


Last September, we announced a beta program to validate onboarding of public folder data from Exchange 2010 on-premises to Exchange Online with large public folder hierarchies (100K – 250K public folders).

We are glad to announce that Exchange Online now officially supports public folder hierarchies of up to 250K public folders in the cloud – more than double the previously supported limit of 100K public folders!

In line with our efforts to help larger customers onboard to Exchange Online, we would like to additionally announce support for the migration of public folders from on-premises Exchange 2010 to Exchange Online, for customers with folder hierarchies up to 250K.

What does all this mean?

  • All existing customers using Exchange Online who would have been constrained by the limit of 100K public folders can now expand their Exchange Online public folder hierarchy up to 250K folders.
  • Any on-premises customers running Exchange 2010 with up to 250K public folders, who would like to onboard to Exchange Online, can now do so.

Note: At this point in time, Exchange 2013/2016 customers with over 100K folders can still only migrate up to 100K public folders to Exchange Online. However, once they have migrated to Exchange Online, they can expand their hierarchy up to 250K public folders. We are working to resolve this limitation for our Exchange 2013/2016 customers in the future.
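
If you are not sure how large your on-premises hierarchy is relative to these limits, a quick folder count from the Exchange Management Shell can help; a minimal sketch (on very large hierarchies this enumeration can take a considerable amount of time):

# Count the public folders in the on-premises hierarchy.
# Note: the root folder may be included in the enumeration, so treat the count as approximate.
(Get-PublicFolder -Recurse -ResultSize Unlimited | Measure-Object).Count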

Keep checking this blog for further updates on the subject.

Public folder team

Released: September 2017 Quarterly Exchange Updates


The latest set of Cumulative Updates for Exchange Server 2016 and Exchange Server 2013 are now available on the Microsoft Download Center.  These releases include fixes to customer reported issues, all previously reported security/quality issues and updated functionality.

Minimum supported Forest Functional Level is now 2008R2

In our blog post, Active Directory Forest Functional Levels for Exchange Server 2016, we informed customers that Exchange Server 2016 would enforce a minimum 2008R2 Forest Functional Level requirement for Active Directory.  Cumulative Update 7 for Exchange Server 2016 will now enforce this requirement.  This change will require all domain controllers in a forest where Exchange is installed to be running Windows Server 2008R2 or higher.  Active Directory support for Exchange Server 2013 remains unchanged at this time.
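
To confirm that your forest meets the new minimum before upgrading, you can check the forest functional level; a minimal sketch, assuming the Active Directory module for Windows PowerShell is available:

# Requires the Active Directory module (RSAT or a domain controller).
Import-Module ActiveDirectory

# ForestMode should report Windows2008R2Forest or higher before deploying Exchange Server 2016 CU7.
Get-ADForest | Format-List Name, ForestMode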

Support for latest .NET Framework

The .NET team is preparing to release a new update to the framework, .NET Framework 4.7.1.  The Exchange Team will include support for .NET Framework 4.7.1 in our December Quarterly updates for Exchange Server 2013 and 2016, at which point it will be optional.  .NET Framework 4.7.1 will be required on Exchange Server 2013 and 2016 installations starting with our June 2018 quarterly releases.  Customers should plan to upgrade to .NET Framework 4.7.1 between the December 2017 and June 2018 quarterly releases.

The Exchange team has decided to skip supporting .NET 4.7.0 with Exchange Server.  We have done this not because of problems with the 4.7.0 version of the Framework, but rather as an optimization to encourage adoption of the latest version.
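
To see which .NET Framework 4.x release is currently installed on a server, one option is to read the Release value from the registry; a minimal sketch, with the caveat that the mapping of Release numbers to versions should be confirmed against Microsoft's published table:

# Read the installed .NET Framework 4.x release number from the registry.
# Values of 461308 or higher generally indicate .NET Framework 4.7.1 or later;
# confirm against Microsoft's documented mapping for your operating system.
(Get-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full' -Name Release).Release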

Known unresolved issues in these releases

The following known issues exist in these releases and will be resolved in a future update:

  • Online Archive Folders created in O365 will not appear in the Outlook on the Web UI
  • Information protected e-Mails may show hyperlinks which are not fully translated to a supported, local language

Release Details

KB articles that describe the fixes in each release are available as follows:

Exchange Server 2016 Cumulative Update 7 does not include new updates to Active Directory Schema.  If upgrading from an older Exchange version or installing a new server, Active Directory updates may still be required.  These updates will apply automatically during setup if the logged on user has the required permissions.  If the Exchange Administrator lacks permissions to update Active Directory Schema, a Schema Admin must execute SETUP /PrepareSchema prior to the first Exchange Server installation or upgrade.  The Exchange Administrator should execute SETUP /PrepareAD to ensure RBAC roles are current.

Exchange Server 2013 Cumulative Update 18 does not include updates to Active Directory, but may add additional RBAC definitions to your existing configuration. PrepareAD should be executed prior to upgrading any servers to Cumulative Update 18. PrepareAD will run automatically during the first server upgrade if Exchange Setup detects this is required and the logged on user has sufficient permission.
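
For reference, a hedged sketch of the preparation commands, run from the root of the Exchange setup media in an elevated prompt with the appropriate permissions:

# Extend the Active Directory schema (Schema Admin rights required).
.\Setup.exe /PrepareSchema /IAcceptExchangeServerLicenseTerms

# Update Active Directory objects and RBAC role definitions (Enterprise Admin rights required).
.\Setup.exe /PrepareAD /IAcceptExchangeServerLicenseTerms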

Additional Information

Microsoft recommends all customers test the deployment of any update in their lab environment to determine the proper installation process for their production environment. For information on extending the schema and configuring Active Directory, please review the appropriate TechNet documentation.

Also, to prevent installation issues you should ensure that the Windows PowerShell Script Execution Policy is set to “Unrestricted” on the server being upgraded or installed. To verify the policy settings, run the Get-ExecutionPolicy cmdlet from PowerShell on the machine being upgraded. If the policies are NOT set to Unrestricted you should use the resolution steps in KB981474 to adjust the settings.
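
For example, to check the current policy and, if needed, adjust it before running setup (confirm the change is acceptable under your organization's security policy, and see KB981474 for the supported resolution steps):

# Check the effective script execution policy on the server being upgraded.
Get-ExecutionPolicy

# If it is not Unrestricted, set it for the machine before running Exchange setup.
Set-ExecutionPolicy Unrestricted -Scope LocalMachine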

Reminder: Customers in hybrid deployments where Exchange is deployed on-premises and in the cloud, or who are using Exchange Online Archiving (EOA) with their on-premises Exchange deployment are required to deploy the most current (e.g., 2013 CU18, 2016 CU7) or the prior (e.g., 2013 CU17, 2016 CU6) Cumulative Update release.

For the latest information on Exchange Server and product announcements please see What's New in Exchange Server 2016 and Exchange Server 2016 Release Notes.  You can also find updated information on Exchange Server 2013 in What’s New in Exchange Server 2013, Release Notes and product documentation available on TechNet.

Note: Documentation may not be fully available at the time this post is published.

The Exchange Team
