Channel: You Had Me At EHLO…

A script to troubleshoot issues with Exchange ActiveSync


The Exchange support team frequently receives cases in which mobile devices using the Exchange ActiveSync (EAS) protocol send so many requests to the Exchange server that the server runs out of resources, effectively creating a ‘denial of service’ (DoS) condition. The worst outcome is that the server also becomes unavailable to other users, including those who do not connect via EAS. We have documented this issue, along with possible mitigations, in the following KnowledgeBase article:

2469722 Unable to connect using Exchange ActiveSync due to Exchange resource consumption

A recent example of this issue was Apple iOS 4.0 devices retrying a full sync every 30 seconds (see TS3398). Another example is devices that do not know how to handle a ‘mailbox full’ response from the Exchange server and therefore retry repeatedly. Such devices can attempt to connect and sync with the mailbox more than 60 times a minute, draining battery life on the device and causing performance issues on the server.

Managing mobile devices & balancing available server resources among different types of clients can be a daunting challenge for IT administrators. Trying to track down which devices are causing resource depletion issues on Exchange 2010/2007 Client Access server (CAS) or Exchange 2003 Front-end (FE) server is not an easy task. As referenced in the article above, you can use Log Parser to extract useful statistics from IIS logs (see note below), but most administrators do not have the time & expertise to draft queries to extract such information from lengthy logs.

The purpose of this post is to introduce the Exchange community to a new PowerShell script that can be used to identify devices causing resource depletion issues, spot performance trends, and automatically generate reports for continuous monitoring. Using this script you can easily and quickly drill into your users' EAS activity, which can be a major task when faced with IIS logs that can grow to several gigabytes in size. The script also makes it easier to identify users with multiple EAS devices. You can use it to establish a baseline during periods of normal EAS activity, and then use that baseline for comparison and reporting when things trend the wrong way. It also provides an auto-monitoring feature which you can use to receive e-mail notifications.

Note: The script works with IIS logs on Exchange 2010, Exchange 2007 and Exchange 2003 servers.
All communication between mobile devices using the EAS protocol and Microsoft Exchange is logged in IIS logs on CAS/FE servers in W3C format. The default W3C fields enabled for logging vary between IIS 6.0 and IIS 7.0/7.5 (IIS 7.0 logs the same fields as 7.5). The script works against both versions.

IIS Logs

Because EAS uses HTTP, all EAS requests are recorded in IIS logs, and logging is enabled by default. Administrators sometimes disable IIS logging to save disk space, so check whether logging is enabled and find the location of the log files by following these steps:

IIS 7

  1. In IIS Manager, expand the server name, e.g. ExchangeServer (Contoso\Administrator)
  2. In the Features View, double click Logging in the IIS section.

IIS 6

  1. In IIS Manager, right click the web site name (for most it should be Default Web Site) and choose Properties
  2. Click on the Web Site tab.
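Once you've located the logs, it helps to know their shape. A W3C-format log file begins with a ‘#Fields:’ header naming the columns, and every subsequent line is a space-delimited record. The following minimal Python sketch (illustrative only, not part of the script; the sample values are made up) shows how the header pairs up with a log line:

```python
# Illustrative sketch: pairing a W3C '#Fields:' header with a log record.
# Real logs may include extra fields such as time-taken, sc-bytes, cs-bytes.

def parse_w3c(fields_line, log_line):
    """Map the column names from a '#Fields:' header onto one log line."""
    fields = fields_line.split()[1:]   # drop the leading '#Fields:' token
    values = log_line.split()
    return dict(zip(fields, values))

fields = "#Fields: date time cs-method cs-uri-stem cs-uri-query sc-status"
line = ("2011-12-24 10:15:01 POST /Microsoft-Server-ActiveSync "
        "Cmd=Sync&User=kim&DeviceId=ABC123&DeviceType=iPhone 200")
entry = parse_w3c(fields, line)
```

This is essentially what Log Parser does for the script: once each line is keyed by field name, queries can filter and aggregate on cs-uri-query and sc-status.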

What are mobile devices responsible for in communications with the server?

Before we delve into the specifics of the script, let's review some important requirements for mobile devices that use EAS to communicate with Microsoft Exchange.

  • When a mobile device is returned an unexpected response from server, it's up to the device to handle the response and retry appropriately at a reasonable interval. Additionally, devices are responsible for handling timeouts that happen outside of IIS, which may be caused by network latency.
  • With each request a device sends to IIS/Exchange, it should also report the User-Agent.

What will you see when you use this script?

The script uses Microsoft Log Parser 2.2 to parse IIS logs and generate results. It builds different SQL queries for Log Parser based on the switches you use (see the table below). A previous blog post, Exchange 2003 - Active Sync reporting, touches on similar uses of Log Parser, and the information in that post still applies to Exchange 2010 and 2007. Since that post, additional commands have been added to the EAS protocol, and this new script also accounts for them while processing the logs.

Here's a list of the EAS commands that the script will report in results:

Sync, SendMail, SmartForward, SmartReply, GetAttachment, GetHierarchy, CreateCollection, DeleteCollection, MoveCollection, FolderSync, FolderCreate, FolderDelete, FolderUpdate, MoveItems, GetItemEstimate, MeetingResponse, Search, Settings, Ping, ItemOperations, Provision, ResolveRecipients, ValidateCert

For more details about each EAS command, see ActiveSync HTTP Protocol Specification on MSDN.
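For EAS requests, the command name, user and device ID travel in the cs-uri-query column of the IIS log; that is how hits get attributed to users and devices. Here's a small Python sketch of that extraction (an illustration only; the query strings below are fabricated samples):

```python
from collections import Counter
from urllib.parse import parse_qs

def eas_request_info(cs_uri_query):
    """Pull the EAS command, user and device ID out of a cs-uri-query value."""
    q = parse_qs(cs_uri_query)
    return (q.get("Cmd", ["?"])[0],
            q.get("User", ["?"])[0],
            q.get("DeviceId", ["?"])[0])

queries = [
    "Cmd=Ping&User=kim&DeviceId=ABC123&DeviceType=iPhone",
    "Cmd=Sync&User=kim&DeviceId=ABC123&DeviceType=iPhone",
    "Cmd=Ping&User=raj&DeviceId=XYZ789&DeviceType=WP",
]
# Tally commands per device, the way the report's per-command columns do
per_device = Counter()
for q in queries:
    cmd, user, device = eas_request_info(q)
    per_device[(device, cmd)] += 1
```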

In addition to these commands, the script also reports the following values.

  1. User
  2. User Name
  3. Device Type
  4. Device ID
  5. User-Agent
  6. sc-bytes: This is only available if you have enabled this field in IIS logging.
  7. cs-bytes: This is only available if you have enabled this field in IIS logging.
  8. time-taken (in milliseconds): This is only available if you have enabled this field in IIS logging.
  9. Total number of requests or requests by Device ID
  10. Total number of all 4xx status codes
  11. Total number of all 5xx status codes (for more info, see KB 318380 for IIS 6.0 and KB 943891 for IIS 7.0)
  12. 409 status codes: 409 (Conflict) - A collection cannot be made at the Request-URI until one or more intermediate collections have been created. The server MUST NOT create those intermediate collections automatically (Ref: RFC 4918)
  13. 500 status codes: After device sends OPTIONS command, it’s possible to get a 500 response back from server with ‘MissingCscCacheEntry’ error. This can happen as a result of an issue with the affinity where you have an Internet-facing CAS array proxying a request to an Internal CAS array. When the Internet-facing array sends the request to the Internal array, a CAS server will answer with the first 401. In the next communication, the request is handled by a different CAS server in the Internal array. Resolving the affinity issue with the Internal CAS array is the solution.
  14. 503 status codes: The server is currently unable to handle the request due to a temporary overloading or maintenance of the server. The implication is that this is a temporary condition which will be alleviated after some delay. If known, the length of the delay MAY be indicated in a Retry-After header. If no Retry-After is given, the client SHOULD handle the response as it would for a 500 response.

    Note: The existence of the 503 status code does not imply that a server must use it when becoming overloaded. Some servers may wish to simply refuse the connection. (Ref: RFC 2616)

  15. 507 status codes: The 507 (Insufficient Storage) status code means the method could not be performed on the resource because the server is unable to store the representation needed to successfully complete the request. This condition is considered to be temporary. If the request that received this status code was the result of a user action, the request MUST NOT be repeated until it is requested by a separate user action. (Ref: RFC 4918)
  16. 451 status codes: Exchange 2007/2010 returns an HTTP 451 response to an EAS client when it determines that the device should be using a ‘better’ CAS for EAS connectivity. The logic used to determine ‘better’ CAS is based on Active Directory sites and whether a CAS is considered ‘Internet-facing’. If the ExternalUrl property on the Microsoft-Server-ActiveSync virtual directory is specified, then that CAS is considered to be Internet-Facing for EAS connectivity. (Ref: TechNet articles Exchange ActiveSync Returned an HTTP 451 Error and Understanding Proxying and Redirection)
  17. TooManyJobsQueued errors: For more info on ‘TooManyJobsQueued’ please refer to KB: 2469722 referenced above
  18. OverBudget: A budget is the amount of access that a user or application may have for a specific setting. A budget represents how many connections a user may have or how much activity a user may be permitted for each one-minute period. (Ref: TechNet article)
  19. Following subset of Common Status Codes:
    InvalidContent, ServerError, ServerErrorRetryLater, MailboxQuotaExceeded, DeviceIsBlockedForThisUser, AccessDenied, SyncStateNotFound, DeviceNotFullyProvisionable, DeviceNotProvisioned, ItemNotFound, UserDisabledForSync
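To see how these status-code counters hang together, here's a small Python sketch (an illustration, not the script's actual Log Parser query) that buckets sc-status values into the 4xx/5xx totals and the individually watched codes from the list above:

```python
from collections import Counter

# Codes this post calls out individually (409, 451, 500, 503, 507)
WATCHED = {409, 451, 500, 503, 507}

def summarize_status(codes):
    """Count total 4xx/5xx responses plus the individually watched codes."""
    summary = Counter()
    for code in codes:
        if 400 <= code < 500:
            summary["4xx"] += 1
        elif 500 <= code < 600:
            summary["5xx"] += 1
        if code in WATCHED:
            summary[code] += 1
    return summary

# Fabricated sample: two successes, two client errors, three server errors
stats = summarize_status([200, 409, 451, 503, 503, 507, 200])
```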

What can you do with this script?

You can process logs using this script to retrieve the following details:

  1. Hits by user/Device ID (users/devices with maximum number of requests sent to server)
  2. Hits per hour/day (helps in determining the frequency of requests sent by user/device, time value is entered in seconds)
  3. Hits by device with specified threshold limit (here you can specify a limit for hits/requests, i.e. all users who are sending 1000 requests per hour/day, etc.)
  4. CSV export of results
  5. HTML report of results
  6. E-mail reports for monitoring (CSV/HTML formats)

Prerequisites:

Please make sure you have Microsoft Log Parser 2.2 installed on your machine before using this script (the -LogParserExec parameter must point to LogParser.exe).

Script Parameters

Parameter (Required/Optional, Type): Description

ActiveSyncOutputFolder (Required, System.String): CSV and HTML output directory.

ActiveSyncOutputPrefix (Optional, System.String): Prefixes a string to the output file name.

CreateZip (Optional, System.Management.Automation.SwitchParameter): Creates a ZIP file. Can only be used with SendHTMLReport.

CreateZipSize (Optional, System.Int32): Threshold file size; the default is 2 MB. Once this has been exceeded, the file will be compressed. Requires SendHTMLReport and CreateZip.

Date (Optional, System.String): Specify a date to parse on, in the format MM-DD-YYYY.

DeviceId (Optional, System.String): ActiveSync device ID to parse on.

DisableColumnDetect (Optional, System.Management.Automation.SwitchParameter): Disables the ability to add additional columns to the report that users may have enabled, for example time-taken. Note: If you are running against multiple files that may have different W3C headers, this switch should be used.

Help (Optional, System.Management.Automation.SwitchParameter): Outputs switch descriptions.

ReportBySeconds (Optional, System.Int32): Generates the report based on the value entered, in seconds.

Hourly (Optional, System.Management.Automation.SwitchParameter): Generates the report on an hourly basis.

HTMLReport (Optional, System.Management.Automation.SwitchParameter): Creates an HTML report.

HTMLCSVHeaders (Optional, System.String): IIS CSV headers to export in the -HTMLReport. Defaults: "DeviceID,Hits,Ping,Sync,FolderSync,DeviceType,User-Agent"

IISLogs (Required, System.Array): IIS log directory. Example: -IISLogs D:\Server,'D:\Server 2'

LogParserExec (Required, System.String): Path to LogParser.exe.

MinimumHits (Optional, System.Int32): Minimum hit threshold a row must meet to appear in the CSV and HTML report.

SendEmailReport (Optional, System.Management.Automation.SwitchParameter): Enables e-mail reporting.

SMTPRecipient (Optional, System.String): SMTP recipient.

SMTPSender (Optional, System.String): SMTP sender.

SMTPServer (Optional, System.String): SMTP server.

TopHits (Optional, System.Int32): Top hits to return. Example: -TopHits 50. Cannot be used with Hourly or ReportBySeconds.

How do you use the script?

Below are some examples (with commands) on how you can use the script and why you might use them.

Hits greater than 1000

The following command will parse all the IIS Logs in the folder W3SVC1 and only report the hits by users & devices that are greater than 1000.

.\ActiveSyncReport.ps1 -IISLog "C:\inetpub\logs\LogFiles\W3SVC1" -LogparserExec "C:\Program Files (x86)\Log Parser 2.2\LogParser.exe" -ActiveSyncOutputFolder c:\EASReports -MinimumHits 1000

[In the above command, the script ActiveSyncReport.ps1 is located at the root of the C: drive; the -IISLog switch specifies the default location of the IIS logs; the -LogparserExec switch points to the Log Parser executable; the -ActiveSyncOutputFolder switch provides the location where the output file is saved; and -MinimumHits 1000 is the threshold parameter explained in the table above.]

Output:

[Screenshot: sample report output]

Usually if a device is sending over 1000 requests per day, we consider this ‘high usage’. If the hits (requests) are above 1500, there could be an issue on the device or environment. In that case, the device & its user’s activity should be further investigated.
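That "hits per hour/day" check is easy to prototype outside the script. The Python sketch below (hypothetical, with fabricated device IDs) flags any device whose request count in a given hour exceeds a threshold, which is the same idea -MinimumHits applies to the parsed logs:

```python
from collections import Counter

def flag_high_usage(requests, per_hour_limit=1000):
    """requests is an iterable of (device_id, hour) pairs, one per request.
    Returns the device IDs whose count in any single hour exceeds the limit."""
    per_hour = Counter(requests)
    return sorted({dev for (dev, _hour), n in per_hour.items() if n > per_hour_limit})

# A chatty device (1,200 hits in hour 9) next to a healthy one (40 hits)
sample = [("ABC123", 9)] * 1200 + [("XYZ789", 9)] * 40
noisy = flag_high_usage(sample)
```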

As a real-world example, in one case we noticed several users who were hitting their Exchange server via EAS heavily (~25K hits, 1K hits per hour), resulting in depletion of resources on the server. Upon further investigation we saw that all of those users' requests were resulting in a 507 error on the back-end mailbox servers. Talking to those EAS users, we discovered that during that time period they were hitting their mailbox size limits (25 MB) and were trying to delete mail from different folders to get under the limit. In such situations, you may also see HTTP 503 ('TooManyJobsQueued') responses in the IIS logs for EAS requests, as described in KB 2469722.

Isolating a specific device ID

The following command will parse all the IIS logs in the folder W3SVC1, look for the Device ID xxxxxx, and display its hourly statistics.

.\ActiveSyncReport.ps1 -IISLog "C:\inetpub\logs\LogFiles\W3SVC1" -LogparserExec "C:\Program Files (x86)\Log Parser 2.2\LogParser.exe" -ActiveSyncOutputFolder c:\EASReports -DeviceID xxxxxx -Hourly

Output:

[Screenshot: sample report output]

With the above information you can pick a user/device and see the hourly trends. This can help identify if it’s a user action or a programmatic one.

As a real-world example, in one case we had to find out which devices were modifying calendar items. We looked at the user/device activity and sorted it by the different commands being sent to the server. We then concentrated on which users/devices were sending the ‘MeetingResponse’ command, along with its frequency, time period and related details. That helped us narrow the issue down to the affected users and their calendar-specific activity, so we could better address the underlying calendaring issue.

Another device-related command and error to look for is the ‘Options’ command; if it does not succeed for a device, an HTTP 409 error code is returned in the IIS log.

Isolating a single day

The following command will parse only the files that match the date 12-24-2011 in the folder W3SVC1 and will only report the hits greater than 1000.

.\ActiveSyncReport.ps1 -IISLog "C:\inetpub\logs\LogFiles\W3SVC1" -LogparserExec "C:\Program Files (x86)\Log Parser 2.2\LogParser.exe" -ActiveSyncOutputFolder c:\EASReports -MinimumHits 1000 -Date 12-24-2011

Output:

[Screenshot: sample report output]

With the above information you can identify users sending a high number of requests. Within the columns, you can also see what kinds of commands those users are sending. This helps in coming up with more directed and efficient troubleshooting.

What Should You Look For?

When analyzing IIS logs with the help of the script, look for one specific command being sent over and over again. The frequency of particular commands matters, and any command failing frequently warrants further investigation. You should also compare the wait times between executions of certain commands: commands that take a long time to execute, or that result in delayed responses from the server, are suspicious and should be investigated further. Keep in mind that the Ping command is an exception; it is expected both to take a long time to complete and to appear frequently in the log.

If you notice continuous failures to connect for a device with an error code of 403, that could mean the device is not enabled for EAS-based access. Sometimes mobile device users complain of connectivity issues without realizing that they're not entering their credentials correctly (understandably, it's easy to make such mistakes on mobile devices). When looking through the logs, you can focus on that user and may find that the user's device is failing after issuing the ‘Provision’ command.

Creating Reports for Monitoring

You may want to create a report or generate an e-mail with such reports and details of user activity.

The following command will parse all the IIS Logs in the folder W3SVC1 and will only report the hits greater than 1000. Additionally it will create an HTML report of the results.

.\ActiveSyncReport.ps1 -IISLog "C:\inetpub\logs\LogFiles\W3SVC1" -LogparserExec "C:\Program Files (x86)\Log Parser 2.2\LogParser.exe" -ActiveSyncOutputFolder c:\EASReports -MinimumHits 1000 -HTMLReport

The following command will parse all the files in the folders C:\Server1_Logs and D:\Server2_Logs and will also email the generated report to ‘user@contoso.com’.

.\ActiveSyncReport.ps1 -IISLog "C:\Server1_Logs","D:\Server2_Logs" -LogparserExec "C:\Program Files (x86)\Log Parser 2.2\LogParser.exe" -ActiveSyncOutputFolder c:\EASReports -SendEmailReport -SMTPRecipient user@contoso.com -SMTPSender user2@contoso.com -SMTPServer mail.contoso.com

We sincerely hope our readers find this script useful. Please let us know how it has made your lives easier and what else we can do to enhance it further.

Konstantin Papadakis and Brian Drepaul

Special Thanks to:
M. Amir Haque, Will Duff, Steve Swift, Angelique Conde, Kary Wall, Chris Lineback & Mike Lagase


Released: Migrating From Exchange Server 2010 in Hosting Mode to Exchange Server 2010 SP2 whitepaper


I’m very happy to be able to announce we have just made available for download a guide to help those of you intending to migrate from Exchange in /Hosting mode to Exchange 2010 SP2 installed without use of the /hosting switch.

Like the previous HMC to Exchange 2010 SP2 guidance, it contains a white paper and some PowerShell scripts. The white paper describes the migration process, and the scripts provide a starting point for your own migration toolkit. Of course the exact migration steps and methodology you will need to follow will depend upon what you have deployed, but we hope what we have provided will help you with your efforts and provide you some useful tools and information.

Check out the Migrating From Exchange Server 2010 in Hosting Mode to Exchange Server 2010 SP2 documentation.

We know any cross-forest migration can be tough, and there are also companies out there that provide migration tools and consulting, so if you feel you need more help than the guidance provides, or if you need some form of longer term co-existence, you may want to look at those offerings.

Finally, as discussed several times on this blog, building a multi-tenancy solution is a complex undertaking. We still very much are recommending that you look at existing solutions available in the market today and/or look at engaging solution integration partners to help with your solution. There are several solutions listed on our web site, and more coming, so before trying to re-invent the wheel to build your multi-tenant offering, look at what the market can offer.

Good luck with your migration!

Greg Taylor
Principal Program Manager (though not as awesome as Ross) 
Exchange Customer Experience

Recovering Public Folders After Accidental Deletion (Part 1: Recovery Process)


Overview

This two-part blog series will outline some of the recovery options available to administrators in the event that one or more public folders are accidentally deleted from the environment. The first part will explain the options, while the second part will outline the architectural aspects of public folders that drive the options.

Introduction

In older versions of Exchange, mailbox and mailbox database recovery was a long, complicated process involving backups, recovery servers, and changes to Active Directory. Successive versions of the product have introduced more and more functionality around recovery (recovery storage groups/databases, database replication, etc.), and we're now at the point where restoring a mailbox is a seemingly trivial operation, and restoring a mailbox database is almost unheard of. But mailboxes aren't the only data stored on Mailbox servers in Exchange Server 2010, and the procedure for restoring public folders and public folder databases differs greatly from the mailbox procedure.

Review of Recovery Options

The first two recovery options are detailed either in TechNet or elsewhere on the Exchange team blog site, so I'll simply list them here and then move on to the real purpose of this blog.  The recovery options for public folders and public folder databases in Exchange Server 2010 are as follows, from the easiest to the most complex:

  1. Recover deleted folders via Outlook (detailed in http://technet.microsoft.com/en-us/magazine/dd553036.aspx).

    Note: Exchange Server 2010 Service Pack 2 fixes an issue where users were unable to use Outlook to recover deleted public folders. This is another reason to upgrade your Exchange Server 2010 systems to SP2 at the earliest opportunity.

  2. Recover deleted folders via ExFolders (http://blogs.technet.com/b/exchange/archive/2009/12/04/3408943.aspx).
  3. Recover folders via public folder database restore.

The first option is the easiest and most obvious - if an end user accidentally deletes a folder, he or she should be able to undelete that folder using Outlook. Failing that, an administrator should be able to use ExFolders to recover that folder. But what if these options won't work in your situation? What if the end user didn't realize he or she deleted the folder, and a month has passed? Or what if your organization has changed the retention settings for deleted public folders, and essentially eliminated the dumpster?  How do you recover public folders in this case?

Recovery Options

At the heart of public folder recovery is a painful truth: you can't delete a public folder from the organization and recover it by simply restoring an older version of a public folder database. If you restore a public folder database from backup and place it back into production, you’ll see the public folders only until the server receives replication messages. Because the public folder hierarchy – the list of all folders in the environment – no longer includes the folders which were deleted, the target server has copies of folders which, from Exchange’s perspective, don’t exist. As soon as that public folder database receives a hierarchy update, it will see that those public folders aren’t present in the hierarchy, and the store will delete the public folder again. Since you can’t edit the hierarchy via the Public Folder Management Console (or even via adsiedit.msc), you can't manually add that public folder back in. So, given this limitation, how do we recover that public folder?

Consider the following points:

  • If you don't replicate every folder to every database, you would need to delete all current databases and then recover from backup any database that contains unique content.  This only works if you have recent backups, of course, and would also require that you export any content generated since that backup, since you’re going to delete all of the existing databases. The deletion is necessary because if a restored public folder store receives hierarchy replication from one of the existing public folder stores, the whole exercise is for naught.
  • If you do replicate all folders to all stores in the environment, you can delete all stores and just restore one database, then replicate the content from that database out to the other servers. Again, this depends on all databases having duplicate content, and you must delete all existing databases before restoring the one from backup.
  • You can restore a backup of the public folder database to an isolated Exchange environment, connect to the public folder database with Outlook, export all content to a series of PSTs, create new folders in the production environment with the same names as the deleted folders, and then import all of the content. This is obviously a somewhat manual process, and most administrators aren't going to want to do this.

Recommended Recovery Procedure

Thankfully there is a much easier process which can be performed in-place and with a minimum of fuss.

  1. Select one of the existing public folder servers in the environment. [Using an existing server simplifies the process a bit.] You will isolate this system from its replication partners, so choose a system that doesn’t serve as the source for a lot of content which needs to be replicated.
  2. Using Registry Editor, set the value of the Replication registry key (HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\MSExchangeIS\<ServerName>\Public-<GUID of Public Store>) to 0 (zero).

    Note: You may need to create this DWORD value if it doesn’t already exist. Further information on the Replication registry key is available in the article “Replication does not occur for one Exchange server in the organization” (http://support.microsoft.com/kb/812294). This registry key also applies to Exchange Server 2007 and 2010.

  3. Restore the public folder database in place using your normal restoration procedure.
  4. Using an Outlook client, log onto a mailbox which uses the restored public folder database as its default public folder store (this is necessary in order to see the restored folders). If you don’t have a mailbox database which uses that public folder database as its default, either create a new mailbox database (recommended) or change an existing mailbox database to use the newly-restored public folder database.
  5. If necessary, click the Folders icon at bottom left of the Navigation screen, and then expand the public folders node.
  6. Copy each of the folders you wish to restore to another location within the public folder hierarchy. If you’re restoring an entire hierarchy, you can simply Ctrl-click and drag the root folder to make new copies of all subfolders. Although the new folders will have similar names to the originals, the underlying folder IDs (FIDs) are different.
  7. Once you’ve created copies of all of the folders, verify that the replica lists include all desired targets (and reconfigure as appropriate).
  8. At this point, it’s now safe to reintroduce that server into the production environment. To do so, dismount the public folder database, delete the Replication registry key (or set it to 1), and then remount the database.
  9. As soon as hierarchy is replicated to the server, the original folders will once again disappear, but the copies of the folders will be replicated to all replication partners.

You may need to add mail-enabled public folders back into distribution groups, as their SMTP addresses will likely be different from those on the original folders. End users will also need to recreate public folder favorites in Outlook.

Summary

Recovering from accidental public folder deletion can be difficult, especially if you don’t take hierarchy replication into account. By restoring into an isolated environment, and then cloning the folders to be restored, you can work around this limitation and restore the missing content. In the next blog entry, I’ll explain the underlying architecture of public folders (including replication, change numbers, and the replication state table) to show why these steps are so necessary.

John Rodriguez
Principal Premier Field Engineer
Microsoft Premier Support

Exchange ActiveSync client connectivity in Office 365


This article explains how mobile devices connect to Exchange Online (Office 365) service and how the connectivity may be impacted if the device does not support certain Exchange ActiveSync (EAS) protocol requirements.

Exchange ActiveSync protocol versions

Most mobile devices that connect to Exchange do so using the Exchange ActiveSync protocol. Each successive version of the protocol offers new capabilities. (The Exchange ActiveSync article maintained by the Exchange community on Wikipedia has more details. -Editor)

Before any device accesses an Exchange mailbox, it negotiates with the Exchange server to determine the highest protocol version that they both support, and then uses this protocol version to communicate. Through the protocol version negotiation, the device and the server agree to behave in a particular manner in accordance with the version selected.

Mailbox redundancy in Office 365

In Office 365, we store multiple copies of user mailboxes, geographically distributed across different sites and datacenters. This redundancy ensures that if one copy of the mailbox fails for some reason (for example due to a hardware failure on a particular server), we can access the same mailbox elsewhere. At any given time, one copy of a particular mailbox is considered active and the remaining ones are deemed passive. When a user connects to their mailbox, they take actions on the active copy, and changes are then propagated to its passive copies.

Mailbox database failover

The switch from one active copy of a mailbox to another copy stored on a different mailbox server may happen for several reasons:

  • Fail over  If hardware or connectivity failures arise in a site, Exchange 2010 in Office 365 automatically switches (or fails over) to a different mailbox database to ensure continuous access to your mailboxes.
  • Load balancing  If some servers are experiencing higher loads, mailboxes may need to be load-balanced across different servers.
  • Testing or maintenance  Mailbox databases may be switched when we are testing our disaster recovery procedures, or when servers are upgraded.

In most cases, failover and load balancing are not scheduled in advance. The process runs automatically when the need arises, without manual intervention.

Exchange ActiveSync connection process

In Office 365, EAS devices connect to a publicly-facing Exchange Client Access Server (CAS). CAS authenticates the user based upon the provided credentials and retrieves the user’s mailbox version and the mailbox’s location. The mailbox’s location is the Active Directory forest and site where the active copy of the user mailbox is stored.

The CAS will handle the connection in one of the following ways, depending on the mailbox location relative to the location of the CAS:

  • Same forest, same site  If the mailbox is in the same Active Directory site as the CAS, CAS will retrieve the content directly from the Mailbox server.
  • Same forest, different site  If the mailbox is in the same Active Directory forest but a different Active Directory site than the CAS, CAS will redirect or proxy the device to the correct Active Directory site in that forest.
  • Different forest, different site  If the mailbox is located in a different Active Directory forest than the CAS, CAS will act differently depending on the EAS protocol version that it previously negotiated with the device:
    • If the device is using earlier versions of the protocol (EAS 12.0 and below), the connection is proxied to a CAS server in the forest where the mailbox is located.
    • If the device is using more recent versions of the protocol (EAS 12.1 and above), CAS issues a redirection request back to the device pointing it to the specific forest containing the mailbox. The device should then establish a direct connection to the new forest.

For an overview of proxying and redirection, see Understanding Proxying and Redirection in Exchange 2010 documentation.
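The routing rules above can be sketched as a small decision function. This is a hypothetical illustration (the function name and return labels are invented, not an actual Exchange API), assuming the protocol version has already been negotiated with the device:

```python
def handle_eas_request(cas_forest, cas_site, mbx_forest, mbx_site, eas_version):
    """Return how a CAS handles an EAS request, per the rules above.

    eas_version is the negotiated EAS protocol version (e.g. 12.0, 12.1, 14.0).
    """
    if mbx_forest == cas_forest:
        if mbx_site == cas_site:
            # Same forest, same site: retrieve content directly from the Mailbox server.
            return "serve-directly"
        # Same forest, different site: redirect or proxy within the forest.
        return "proxy-or-redirect-to-site"
    # Different forest: behavior depends on the negotiated protocol version.
    if eas_version <= 12.0:
        return "proxy-to-remote-forest"      # EAS 12.0 and below
    return "redirect-device-to-forest"       # EAS 12.1 and above

# A device on EAS 14.0 whose mailbox lives in another forest gets a redirect.
print(handle_eas_request("NA1", "Site-A", "EU2", "Site-X", 14.0))
```

Devices that mishandle the last branch (the redirect response) are exactly the ones discussed in the next section.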

How do devices choose which site to access?

Phones and tablets connect to Office 365 in a number of ways, depending on the device capabilities, configuration and which protocol version has been negotiated. Specifically:

  • The device may automatically discover the correct mailbox forest based on the user’s email address if the device supports the EAS Autodiscover command.
  • The user may configure the device to access a specific URL:
    • If the user enters the Office 365 endpoint URL for mobile devices (m.outlook.com), this address points the device to a number of forests that are geographically closest to the user. The device then connects to one of the returned forests.
    • If the user enters a specific forest URL, the device connects to that forest.
    • If the user enters a specific site URL, the device connects directly to that site.

Office 365 contains a number of Active Directory forests, each of which contains several sites. Each forest has a default front-end site. When a device connects to a forest, it transparently connects to the front-end site for that forest.

Depending on whether the device connects to the Active Directory site where the user’s mailbox is located, the connection logic either retrieves the content directly, or proxies or redirects the device to the correct site.

Issues with redirection

More recent versions of EAS protocol support the redirection command. When a device using a more recent version of the protocol reaches a CAS in a site that doesn't contain the requested mailbox, the server responds to the request by redirecting the device to a CAS in the site hosting the active copy of the user’s mailbox. We assume that devices which advertise support for EAS protocol version 12.1 or later comply with the EAS requirement to support the HTTP redirection error code.

Note: If you want to determine the Exchange ActiveSync protocol version that your device is currently using, refer to your device manufacturer’s documentation.

A problem can occur when a device claims to support redirection, but does not reliably do so. These devices cannot access the mailbox, and the user may receive a number of errors depending on the device (for example, unable to connect to server). A very small number of devices connecting to Office 365 are impacted by this failure to implement Exchange ActiveSync completely (about 1%).

Modifying the Office 365 deployment to compensate for these devices that don’t correctly support redirection would result in a degraded experience for all mobile device users. Performance for the devices is better if they connect to the correct Active Directory site directly after being redirected.

Phones and tablets that are part of the Exchange ActiveSync Logo Program support redirection and thus do not experience this issue. We are working with a number of other manufacturers to help them support the redirection logic and fix their connectivity issues.

How to fix it?

If your users are having trouble connecting to their Office 365 mailboxes on devices that don’t fully support redirection, use one of the following methods to fix the issue:

  1. Update the Exchange server setting on your device to m.outlook.com. Then, try connecting to your account and see if this change resolves the issue.
  2. If using the Exchange server name m.outlook.com does not fix the issue:
    1. Sign in to your account using Outlook Web App on a computer.
    2. Click Options in the top right corner and select See All Options… as shown below.
      Screenshot: OWA | See All Options
    3. On the My Account tab (shown below), click Settings for POP, IMAP and SMTP Access…
      Screenshot: Retrieving the Client Access server name from POP, IMAP and SMTP Access settings in Outlook Web App
    4. On the page that opens, under External POP setting you'll see a server name listed.

      Use the Server name on this page for the Exchange server value on your device email configuration.

      Note: Although the setting is listed as the server name for POP, it's also an endpoint for Exchange ActiveSync.

  3. If using m.outlook.com and the External POP Settings/Server name value did not fix the issue:
    1. Go back to the main page of Outlook Web App. In the top right corner, click on the question mark next to Options and then select About as shown below.
      Screenshot: Retrieving the Host name using Outlook Web App
    2. On the About page, you'll see the entry for the Host name listed. Use the value next to the Host name as the server setting on your mobile device.

    Note: When you use the Host name as your Exchange server setting, you may need to update the setting in the future. As I described before, the mailboxes may be moved from one site to another, and devices that do not support the redirect command correctly will lose connectivity. If your user mailbox moves due to failover or upgrades, your site name (Host name) may change and you may need to reconfigure your device to point to the new site.

  4. Another method to resolve the issue may be to try using a different email application on your mobile device. Some EAS applications are able to properly handle redirection even on a device that doesn’t support the redirection command.

More help and resources

Katarzyna Puchala

The title of this post was changed shortly after publishing. The permalink URL may differ from the post title.

Recovering Public Folders After Accidental Deletion (Part 2: Public Folder Architecture)


Introduction

In the previous blog entry, I explained how to safely recover accidentally deleted public folders from backup. I briefly mentioned some important public folder concepts in that article, and in this second part, I’m going to describe some of the inner workings of public folders themselves.  Each organization maintains a list of all public folders in the environment, along with which servers host replicas of each folder.  This list is called the hierarchy, and it's common to all public folder stores in the environment.  Each public folder store has a copy of the hierarchy, and uses it (among other things) to refer end users to public folder replicas on other servers.  Each public folder store also maintains a table, called the replication state table, which keeps track of the status of each folder.  This table is a critical yet little-understood feature of public folders, and it has a huge impact on recovery.

Overview

As I said above, each public folder store maintains a replication state table, but unlike the hierarchy, it's unique to each store.  A public folder store maintains information about the public folders for which it has a replica, not just for itself but for all servers with that replica.  It does this so that it knows which other stores have more up-to-date public folder content, or which ones might have items required for backfill replication (catching up on old or missing items).

Imagine the following scenario:  we have three servers, each hosting a public folder database – PFS1, PFS2, and PFS3.  We have a folder – Folder1 – which is replicated to each database.  If I could peer into the replication state table on PFS1, I would see an entry for Folder1, and that entry would contain information about Folder1's status not only for PFS1, but also for PFS2 and PFS3.  What kind of information does this table actually contain?  To answer that, we need to dig yet further into public folder structure, and talk about CNs.

Change Numbers

CNs – or, to give them their full name, change numbers – are numbers assigned to each modification made to content in a public folder.  Think of them as per-folder odometers – they increment each time a change is made to a folder, and only ever increase, never decrease. Each public folder database assigns CNs to the changes made on its replica, and that information is transmitted to the other replicas.  These other replicas use this information to see if they've already received a particular change.  For example, if I make a change to Folder1 on PFS1, that database might assign change number 211 to that modification.  When the public folder database replicates that change to other databases, it records and transmits that change as FID1-123:PFS1:211.  [Folder1 is represented within the public folder database, and by extension in the replication traffic, by a folder ID (FID). This becomes very important later.] PFS2 receives the replication message and checks to see if it has already received CN 211 from PFS1.  If it hasn't, it applies the change and updates its own entry in the replication state table to reflect the fact that it has now received change 211 for Folder1 (FID1-123) from PFS1.  If PFS3 later replicates the same change (FID1-123:PFS1:211) to PFS2, PFS2 will check its list, see that it has indeed already received that change, and discard that particular replication message.
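In code, the duplicate-detection step just described might look like this minimal sketch (the class name and data model are invented for illustration; the real replication state table holds far more than this):

```python
class ReplicaState:
    """Tracks, per folder and per source server, which CNs this store has applied."""

    def __init__(self):
        self.received = {}   # (folder FID, source server) -> set of applied CNs

    def apply_change(self, fid, source, cn):
        """Apply a replicated change like FID1-123:PFS1:211 exactly once.

        Returns True if the change was new and applied, False if it was a
        duplicate replication message and was discarded.
        """
        seen = self.received.setdefault((fid, source), set())
        if cn in seen:
            return False     # already received this CN from this source: discard
        seen.add(cn)         # record the change so later duplicates are ignored
        return True

# PFS2 receives change 211 from PFS1, then the same change again via PFS3.
pfs2 = ReplicaState()
print(pfs2.apply_change("1-123", "PFS1", 211))   # prints True  (applied)
print(pfs2.apply_change("1-123", "PFS1", 211))   # prints False (discarded)
```

The key point the sketch captures: deduplication is keyed on the *originating* server's CN, so the same change arriving via a different partner is still recognized.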

Here’s a sample hierarchy replication message. Notice the CN min, CN max, and FID entries in the description field.

Event Type: Information
Event Source: MSExchangeIS Public Store
Event Category: Replication Outgoing Messages
Event ID: 3018
Description:
An outgoing replication message was issued.
Type: 0x2
Message ID: <23599A0EB070AA92F03E31C546C9C8FFA4F7@contoso.com>
Database "PFDB"
CN min: 1-11D3, CN max: 1-11D4
RFIs: 1
1) FID: 1-38BF, PFID: 1-1, Offset: 28
        IPM_SUBTREE\TestPF

At any given time, each public folder store knows exactly what content it has, and has a general idea of what content the other public folder stores have.  This is an important point – public folder databases are aware of their surroundings in the environment.  It's this awareness that has implications for recovery.

The Replication State Table

Here’s a quick visualization of how a public folder change is propagated from one server to another. This table simulates the replication state table which is internal to every server. There are four columns – the first represents the replication details (the CN sets), and the next three represent the same folder on each of the three servers. In essence, this table shows you what each server knows about the other servers’ knowledge of this particular folder. Please note that this is a simplified version of the replication state table – it’s actually quite a bit more complicated than this, but this is all the detail 99.99% of engineers will ever need.

In this example, Folder1 has been replicated to three systems – PFS1, PFS2, and PFS3 – and public folder replication is fully up-to-date. The servers know what they’ve sent to their replication partners, and what’s been replicated back to them. Since end users could conceivably make updates on any of the servers, they each have their own CN sets for the same folder.

Details From

Folder1 on PFS1

Folder1 on PFS2

Folder1 on PFS3

PFS1

Last sent CN PFS1:10

FID1-123:PFS1:1-10

FID1-123:PFS2:1-20

FID1-123:PFS3:1-30

FID1-123:PFS1:1-10

FID1-123:PFS2:1-20

FID1-123:PFS3:1-30

PFS2

FID1-123:PFS1:1-10

FID1-123:PFS2:1-20

FID1-123:PFS3:1-30

Last sent CN PFS2:20

FID1-123:PFS1:1-10

FID1-123:PFS2:1-20

FID1-123:PFS3:1-30

PFS3

FID1-123:PFS1:1-10

FID1-123:PFS2:1-20

FID1-123:PFS3:1-30

FID1-123:PFS1:1-10

FID1-123:PFS2:1-20

FID1-123:PFS3:1-30

Last sent CN PFS3:30

An end user connected to PFS1 makes a change, to which PFS1 assigns change number 11. The replication state table on PFS1 is updated to reflect this new CN.

Details From

Folder1 on PFS1

Folder1 on PFS2

Folder1 on PFS3

PFS1

Last sent CN PFS1:11

FID1-123:PFS1:1-10

FID1-123:PFS2:1-20

FID1-123:PFS3:1-30

FID1-123:PFS1:1-10

FID1-123:PFS2:1-20

FID1-123:PFS3:1-30

PFS2

FID1-123:PFS1:1-10

FID1-123:PFS2:1-20

FID1-123:PFS3:1-30

Last sent CN PFS2:20

FID1-123:PFS1:1-10

FID1-123:PFS2:1-20

FID1-123:PFS3:1-30

PFS3

FID1-123:PFS1:1-10

FID1-123:PFS2:1-20

FID1-123:PFS3:1-30

FID1-123:PFS1:1-10

FID1-123:PFS2:1-20

FID1-123:PFS3:1-30

Last sent CN PFS3:30

PFS1 packages this change (which we assume is the only one made to Folder1) and sends it to PFS2 and PFS3, which update their own replication state tables.

Details From

Folder1 on PFS1

Folder1 on PFS2

Folder1 on PFS3

PFS1

Last sent CN PFS1:11

FID1-123:PFS1:1-11

FID1-123:PFS2:1-20

FID1-123:PFS3:1-30

FID1-123:PFS1:1-11

FID1-123:PFS2:1-20

FID1-123:PFS3:1-30

PFS2

FID1-123:PFS1:1-10

FID1-123:PFS2:1-20

FID1-123:PFS3:1-30

Last sent CN PFS2:20

FID1-123:PFS1:1-10

FID1-123:PFS2:1-20

FID1-123:PFS3:1-30

PFS3

FID1-123:PFS1:1-10

FID1-123:PFS2:1-20

FID1-123:PFS3:1-30

FID1-123:PFS1:1-10

FID1-123:PFS2:1-20

FID1-123:PFS3:1-30

Last sent CN PFS3:30

Both PFS2 and PFS3 apply the changes, and since those two systems received the change from PFS1, they also update their “knowledge” of PFS1. Notice that PFS1 does not update its entries for PFS2 and PFS3 – while it has sent the content to them, it hasn’t received confirmation that they’ve applied that change. [Because public folder replication messages are delivered via Hub Transport, public folder stores don’t directly interact and so never assume that the updates were delivered and applied.]

Continuing with our example, an end user makes a change to Folder1 on PFS3:

Details From

Folder1 on PFS1

Folder1 on PFS2

Folder1 on PFS3

PFS1

Last sent CN PFS1:11

FID1-123:PFS1:1-11

FID1-123:PFS2:1-20

FID1-123:PFS3:1-30

FID1-123:PFS1:1-11

FID1-123:PFS2:1-20

FID1-123:PFS3:1-30

PFS2

FID1-123:PFS1:1-10

FID1-123:PFS2:1-20

FID1-123:PFS3:1-30

Last sent CN PFS2:20

FID1-123:PFS1:1-10

FID1-123:PFS2:1-20

FID1-123:PFS3:1-30

PFS3

FID1-123:PFS1:1-10

FID1-123:PFS2:1-20

FID1-123:PFS3:1-30

FID1-123:PFS1:1-10

FID1-123:PFS2:1-20

FID1-123:PFS3:1-30

Last sent CN PFS3:31

That change is now replicated to PFS1 and PFS2:

Details From

Folder1 on PFS1

Folder1 on PFS2

Folder1 on PFS3

PFS1

Last sent CN PFS1:11

FID1-123:PFS1:1-11

FID1-123:PFS2:1-20

FID1-123:PFS3:1-30

FID1-123:PFS1:1-11

FID1-123:PFS2:1-20

FID1-123:PFS3:1-30

PFS2

FID1-123:PFS1:1-10

FID1-123:PFS2:1-20

FID1-123:PFS3:1-30

Last sent CN PFS2:20

FID1-123:PFS1:1-10

FID1-123:PFS2:1-20

FID1-123:PFS3:1-30

PFS3

FID1-123:PFS1:1-11

FID1-123:PFS2:1-20

FID1-123:PFS3:1-31

FID1-123:PFS1:1-11

FID1-123:PFS2:1-20

FID1-123:PFS3:1-31

Last sent CN PFS3:31

Note that when PFS3 sent out its replication message, it included not only its own update, but also the fact that it had received update 11 from PFS1.

Again, while every server has the most up-to-date content for Folder1, they don’t necessarily know that every replica is up-to-date. [PFS1, for example, “thinks” that PFS2 is out of date, while PFS3 “thinks” that both PFS1 and PFS2 are out of date.] It’s important to note that this isn’t a problem – by only encapsulating status messages in outgoing replication, Exchange avoids saturating the network with constant messages from various servers confirming the receipt of recent replication messages.
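The piggybacking just described can be illustrated with a hypothetical message shape (this is an assumed structure purely for illustration; real replication messages carry much more, as the event 3018 sample earlier shows):

```python
def outgoing_message(fid, source, new_cn, known_cnsets):
    """Build a replication message that carries a new change plus the sender's
    current knowledge of every replica's highest CN (the piggybacked status)."""
    return {
        "fid": fid,                      # the folder being replicated
        "change": (source, new_cn),      # the new change itself
        "status": dict(known_cnsets),    # sender's view, e.g. highest CN per server
    }

# When PFS3 replicates its change 31, the message also tells partners that
# PFS3 has already received PFS1's change 11 - no separate acknowledgement
# message is ever sent.
msg = outgoing_message("1-123", "PFS3", 31, {"PFS1": 11, "PFS2": 20, "PFS3": 31})
print(msg["status"]["PFS1"])   # prints 11
```

This is the design trade-off the paragraph above describes: status travels for free inside content replication, at the cost of each server's view of its partners lagging slightly behind reality.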

Backfill Replication

However, from time to time, a server loses its connection to its replication partners, either through network failure, service failure, or other causes. When it does, its replication state table no longer receives updates to the CNs held by its partners for their replicas. In other words, its replication state table is outdated. When that server reconnects with its partners, and receives a new message, it may find that the CN on that new message is much higher than what it expected. Using the previous example, imagine that PFS3 is isolated from PFS1 and PFS2 due to a server failure, and does not receive updates to Folder1 from the other servers for several hours. The resulting table might look like this:

Details From

Folder1 on PFS1

Folder1 on PFS2

Folder1 on PFS3 (OFFLINE)

PFS1

Last sent CN PFS1:16

FID1-123:PFS1:1-16

FID1-123:PFS2:1-20

FID1-123:PFS3:1-30

FID1-123:PFS1:1-11

FID1-123:PFS2:1-20

FID1-123:PFS3:1-30

PFS2

FID1-123:PFS1:1-16

FID1-123:PFS2:1-28

FID1-123:PFS3:1-30

Last sent CN PFS2:28

FID1-123:PFS1:1-10

FID1-123:PFS2:1-20

FID1-123:PFS3:1-30

PFS3

FID1-123:PFS1:1-11

FID1-123:PFS2:1-20

FID1-123:PFS3:1-31

FID1-123:PFS1:1-11

FID1-123:PFS2:1-20

FID1-123:PFS3:1-31

Last sent CN PFS3:31

Notice that PFS1 is aware that the most recent replication message from PFS2, for change number 28, also included information about PFS2’s knowledge of PFS1 (namely, that PFS2 has received PFS1’s updates 12 through 16). PFS3 has not received any of these recent updates.

However, when PFS3 is brought back online, and receives a new replication message, it suddenly learns of the missing messages. This triggers a backfill request – a request from PFS3 to the source server for the missing content.

Details From

Folder1 on PFS1

Folder1 on PFS2

Folder1 on PFS3

PFS1

Last sent CN PFS1:17

FID1-123:PFS1:1-17

FID1-123:PFS2:1-20

FID1-123:PFS3:1-30

FID1-123:PFS1:1-11, 17

FID1-123:PFS2:1-20

FID1-123:PFS3:1-30

PFS2

FID1-123:PFS1:1-16

FID1-123:PFS2:1-28

FID1-123:PFS3:1-30

Last sent CN PFS2:28

FID1-123:PFS1:1-16

FID1-123:PFS2:1-28

FID1-123:PFS3:1-30

PFS3

FID1-123:PFS1:1-11

FID1-123:PFS2:1-20

FID1-123:PFS3:1-31

FID1-123:PFS1:1-11

FID1-123:PFS2:1-20

FID1-123:PFS3:1-31

Last sent CN PFS3:31

Backfill Request PFS1:12-16

Backfill Request PFS2:21-28

Notice that PFS3 is missing updates 12 through 16 for PFS1, and 21 through 28 for PFS2. PFS3 will request the missing content from any server that it believes has that content, which in this case would mean either PFS1 or PFS2. How does PFS3 know that both servers have the content? Because the replication message from PFS1, which included change number 17, included the information about the CN sets for PFS1, PFS2, and PFS3.

Strictly speaking, Exchange doesn’t issue these backfill requests right away – it waits a few hours (six or more, depending on the situation) before sending them out, just in case one of its replication partners happens to send that missing content. If a specific update hasn’t been received after the backfill timeout is reached, Exchange then generates that backfill request and sends it to the replication partners. This process is detailed in the “Backfill Requests and Backfill Messages” section of the TechNet page on “Understanding Public Folder Replication” at http://technet.microsoft.com/en-us/library/bb629523.aspx#Backfill.
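The gap computation behind a backfill request can be sketched in a few lines. This is an illustrative sketch only (the function name is invented): given the highest CN a replica holds from a source and the highest CN a partner advertises for that source, the difference is the range a backfill request would ask for.

```python
def backfill_range(held_max, advertised_max):
    """Return the (first, last) missing CNs, or None if already up to date."""
    if advertised_max <= held_max:
        return None                      # nothing missing from this source
    return (held_max + 1, advertised_max)

# PFS3 holds PFS1 changes 1-11 and PFS2 changes 1-20, but incoming messages
# advertise PFS1 at 16 and PFS2 at 28 - matching the requests in the table above.
print(backfill_range(11, 16))   # prints (12, 16)
print(backfill_range(20, 28))   # prints (21, 28)
print(backfill_range(31, 31))   # prints None (already current)
```

As the paragraph above notes, Exchange does not fire these requests immediately; it waits out the backfill timeout first in case normal replication fills the gap.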

Removing or Deleting Replicas

When you remove a public folder replica, the owning public folder database contacts all the other databases to find out whether they have all of the content that's contained within the replica that's about to be removed.  It does so by sending out a status message that contains the CNs for its replica of the folder. For example, if I were to remove the replica of Folder1 from PFS3, it would send a message to PFS1 and PFS2 confirming that between the two of them, they have every update from PFS3 from 1 to 31. [This is an important point: the content doesn’t need to be on one server. As long as the content exists somewhere in the organization, the replica can be removed.] If PFS3 had any unique content that neither PFS1 nor PFS2 had, it would replicate those items to its replication partners. Once it has confirmed that it no longer has any unique content, the public folder store removes that replica.
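That "content exists somewhere" condition is essentially a set-cover test. Here's a hedged sketch (illustrative only; the real status-message exchange is more involved than a single set comparison):

```python
def safe_to_remove(local_cns, partner_cnsets):
    """True if every CN held locally is also held by at least one partner.

    local_cns: set of CNs the replica being removed holds from one source.
    partner_cnsets: list of sets, one per remaining replica, for the same source.
    """
    covered = set().union(*partner_cnsets) if partner_cnsets else set()
    return local_cns <= covered   # no unique content left behind

# Removing Folder1 from PFS3: PFS1 and PFS2 together hold PFS3's changes 1-31.
print(safe_to_remove(set(range(1, 32)),
                     [set(range(1, 32)), set(range(1, 21))]))   # prints True

# If the partners only held changes 1-29, PFS3 would still have unique content
# (changes 30-31) and would replicate it out before removing the replica.
print(safe_to_remove(set(range(1, 32)), [set(range(1, 30))]))   # prints False
```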

However, when you delete a public folder outright (as in, remove all replicas), there's no need to preserve content, so it's deleted from every public folder store.  This is why it’s vital that public folder administrators understand the difference between removing a replica (with Set-PublicFolder -Replicas) and deleting a public folder (with Remove-PublicFolder).

These changes to replica lists and outright deletions are transmitted just like any other public folder change – as hierarchy replication messages, complete with their own CNs.  If I remove the replica of Folder1 from PFS1, that change will go to PFS2 and PFS3 so that they know that they no longer need to replicate new content for Folder1 to PFS1.  Likewise, if I delete Folder1, it will be deleted from all of the databases and removed from the hierarchy as well.  The replication state table keeps track of changes to hierarchy too, and so knows which folders exist in the organization and which don't. It is this tracking mechanism that prevents us from simply restoring a public folder database and reintroducing the deleted folders into the environment.

Recovery of Deleted Public Folders

In part one of this blog, I outlined a process for safely and successfully restoring public folders which were accidentally deleted from the environment. Step six of the procedure reads, in part, “Copy each of the folders you wish to restore. [Although the new folders will have similar names to the originals, the underlying folder IDs (FIDs) are different.]” I’ve added italics to highlight the key point – when you copy (clone) public folders, you’re really creating new folders. They may bear the same name as the originals, but the folder IDs are different. So although my cloned copy of Folder1 may look like the original Folder1, and contain the same items as Folder1, none of the replication messages for the original Folder1 will apply to it, because it’ll have a completely different FID. This new folder is added to the hierarchy, and because end users see the name, not the FID, they’ll simply use it as they would the original folder.

Troubleshooting Replication

If you’re looking for troubleshooting information, look no further than Bill Long’s excellent four-part blog series on public folders:

Summary

Public folders use their own replication mechanism, where changes are tracked in an internal, non-editable table and communicated to replication partners alongside the actual content changes. The public folder hierarchy follows the same principles, and so changes made to the hierarchy are replicated to all public folder databases in the environment. Understanding the replication mechanism helps an administrator understand not only disaster recovery, but troubleshooting as well.

John Rodriguez
Principal Premier Field Engineer
Microsoft Premier Support

Announcing the Exchange Client Network Bandwidth Calculator Beta


I am extremely pleased to announce that the all-new Exchange Client Network Bandwidth Calculator Beta is available for download!

Over the past 12 months we have been working on a new calculator to help with Exchange client network bandwidth approximation. This new calculator is based on all new prediction data and is designed to work with both Exchange on-premises and Office 365 deployments! (Yes, we know it’s long overdue!)

What does it do?

The brief for this calculator was concise and simple: we wanted to be able to predict the client network bandwidth requirements for a specific set of users. The calculator needed to handle Outlook, OWA and mobile devices, in both on-premises and Office 365 scenarios.

The following clients are included in this Beta; further clients will be added over time.

  • Outlook 2010
  • Outlook 2007
  • Outlook 2003
  • OWA 2010
  • OWA 2007
  • Windows Mobile
  • Windows Phone

How does it work?

The calculator is based on new prediction algorithms derived by analysing the behaviour of each client individually. This approach allows a bandwidth model to be created for each client scenario, and is very scalable and flexible.

Input data is based on existing user profile metrics, such as messages sent and received per user per day and average message size. Once these parameters are provided the calculator is able to predict how much bandwidth each client will require to perform adequately.

The predictions provided represent the requirements during the busiest two hours of the working day.

Why a Beta?

The new prediction algorithms have been created from scratch and validated for accuracy internally; however we would like to gather some more telemetry data from real world scenarios to fine tune the calculator prediction formulae. During the Beta process we would love to hear your feedback and suggestions for the calculator. If you can provide real world prediction vs. observations data for your infrastructure that would also be extremely welcome!

Suggestions and feedback requests should be sent to netcalc@microsoft.com

The goal is to complete the Beta process by mid-2012.

How do I use it?

The calculator is split into two main sheets in Excel.

  • Input – A place to enter organization information and usage profile information
  • Client Mix – A place to enter how many clients of each type and profile exist in each site

There is an accompanying manual that explains things in more detail, so I will only take a quick look here.

The Input Sheet

The input sheet is broken up into five sections:

  1. Organization Data
  2. User Profile 1
  3. User Profile 2
  4. User Profile 3
  5. User Profile 4

The Organization data section represents global settings that apply for the entire organization and the user profiles are pre-defined profiles that represent sets of users from light through to very heavy. The user profiles are customizable and should be edited to reflect your own environment for an accurate prediction.


The Client Mix Sheet

Once you have completed the Input Sheet, you can move on to the Client Mix sheet. This is where you can list out the number of each client and define your sites. The sheet is made up of three sections:

  1. Site Definition
  2. Client Definition
  3. Network Predictions

The site definition section allows you to configure a representative model of your physical network site topology; this should represent physical sites and the expected user usage profile for that site. The Client definition section allows you to configure how many users will exist at each site and which type of Exchange client they will be using. The Network predictions section shows the predicted network requirement for each defined site.


An Example

To make things a little easier I am going to walk through a very basic example to get us started.

In this example we have a customer who is moving to Office 365. They want to know how much Internet bandwidth will be required to support their Exchange clients after the migration is completed.

Organization Information

  • 3 Main sites (3650 users)
    • London:
      • 1500 Outlook 2007 Users (Medium Profile)
      • 300 Outlook Web Access users (Light Profile)
    • Manchester:
      • 600 Outlook 2007 Users (Heavy Profile)
      • 150 Outlook Web Access users (Light Profile)
    • Paris:
      • 1100 Outlook Web Access (Light Profile)

London and Manchester share the same Internet connection, but Paris has its own local breakout.

Note: For this example I am going to use the built-in user profile data to keep things simple, however it is strongly recommended to define your own user profile data based on research into your messaging solution.

The first thing we need to do is to configure the Input Sheet. The defaults are pretty good in this example, but the OAB size is actually 10MB rather than 100MB so I will set the Offline Address Book Size to 10MB.


The user profiles I would usually edit, but in this case I will leave them at their default settings and move on to the Client Mix sheet. The Client Mix sheet will give us totals, so I generally group together sites that share the same internet connection. In this instance that means we can put London and Manchester on the same sheet but we need a new sheet for Paris. To make a copy of the sheet, right click on the tab at the bottom and select Move or Copy; in the Move or Copy Dialog highlight the Client Mix sheet, tick the box to Create a copy and then click OK.


Your Excel workbook will now contain “Client Mix (2)” and “Client Mix” – I generally rename these to something meaningful, in this instance I am going to rename one UK Sites and the other FR Sites.


We will begin by defining the sites in the UK Sites sheet. The information we have suggests that we have two sites, London and Manchester, and that there are two user profile types in each site. Since we can only assign a single user profile per site entry, this means we are going to need four site entries…


We then need to define the types of clients that will exist in each site. I have hidden some rows and columns that we don’t need to make the data easier to read. I have entered the number of each client type into the sheet – we know that it must be OA-cached for Outlook 2007 since Office 365 only provides an Outlook Anywhere connection and we know that OWA must be 2010 since again, Office 365 is based on Exchange Server 2010.


If we hide some more cells we can take a better look at the prediction values.


First, we are generally interested in the “Exchange to Client” requirements, since they are higher and most links still provide the same upload and download capacity. Where you have an asymmetric line, you may also need to look at the “Client to Exchange” bandwidth. In this example the customer has a symmetric connection.

London has two sets of users defined, and the calculator predicts that the Outlook users will need 3.66Mbits/sec of bandwidth and that the OWA users will require a further 0.93Mbits/sec. The total for London is 4.59Mbits/sec (you need to add these together manually in this case).

Manchester also has two sets of users defined and the calculator predicts a total of 3.53Mbits/sec.

Since both Manchester and London share the same internet connection, the calculator is predicting that the customer will need to ensure that 8.12Mbits/sec of network bandwidth is available to support this workload and that the maximum network link latency is 320ms or less.
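The arithmetic above is easy to check in a few lines (values taken from the example; note the calculator itself does not sum sites that share a connection for you):

```python
# Per-site predictions from the walkthrough above.
london = 3.66 + 0.93      # Outlook 2007 users + OWA users in London
manchester = 3.53         # combined prediction for Manchester

# London and Manchester share one internet connection, so their totals
# are added together to size that shared link.
uk_link = round(london + manchester, 2)

print(round(london, 2))   # prints 4.59
print(uk_link)            # prints 8.12
```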

If we repeat this for our FR Sites tab…


The calculator predicts that the internet connection in Paris will require 1.88Mbits/sec of available network bandwidth to support their 1100 Office 365 OWA users.

This is obviously a fairly simple example but I would encourage you to model your own organization to get a feel for the calculator and provide feedback on how the calculator is working for you.

What doesn’t it do?

The calculator does not provide information on the following…

  • Non-Microsoft clients: You will need to speak to the specific vendors to get bandwidth information for their clients.
  • BlackBerry: I know this is a non-Microsoft client but everyone asks about it! You will need to speak with BlackBerry to get this data.
  • Server-side Bandwidth Data: Data such as SMTP, DAG replication, ADFS 2.0, and authentication etc. are all out of scope for this calculator.
  • Outlook 2011 / Entourage EWS: These clients are being analyzed currently and will be added during the Beta timeframe.
  • Migration Traffic: The calculator predicts steady state traffic requirements
  • Outlook 2000 and older: Outlook 2003 is the oldest client included in this calculator

Feedback and Other Stuff…

We have published other network bandwidth guidance on TechNet; the most commonly used is the white paper Outlook Anywhere Scalability with Outlook 2007, Outlook 2003 and Exchange 2007. That guidance was produced using Loadgen during lab tests of CAS scalability. The predictions from that testing vary slightly from those in the new calculator because of the way the data was gathered in each case. Since the newer test data was generated and analyzed specifically to enable network bandwidth prediction, the newer values should be more precise; the new calculator also takes into account many more variables and user profiles than the guidance in the Outlook 2007 white paper, so again it should provide a more accurate prediction.

During the Beta process we recommend that you use both the old white paper and the new calculator to determine your requirements.

As I said at the beginning of this post (which is now much longer than I wanted it to be!), I am really interested to hear feedback from you after using the calculator; positive, negative and requests for help or feature requests etc. Send your feedback (please be nice!) to…

netcalc@microsoft.com

I will be writing some more posts regarding this calculator over the coming months, with more examples and a deep-dive that explains how the prediction data was generated.

Thanks for reading and I hope you find this new calculator useful.

Neil Johnson
Senior Consultant, MCS UK

Released: Update Rollup 1 for Exchange 2010 Service Pack 2


Earlier today the Exchange CXP team released Update Rollup 1 for Exchange Server 2010 SP2 to the Download Center.

This update contains fixes for a number of customer-reported and internally found issues since the release of SP2. See KB 2645995: Description of Update Rollup 1 for Exchange Server 2010 Service Pack 2 for more details.

Note: If some of the following KB articles do not work yet, please try again later.

We would like to specifically call out the following fixes which are included in this release:

  • New updates for Dec DST - Exchange 2010 - SP2 RU1 - Display name for OWA.
  • 2616230 Exchange 2010 CAS server treats UTF-7 encoding NAMESPACE string from CHS Exchange 2003 BE server as ASCII, caused IMAP client fails to login.
  • 2599663 RCA crashes when recipient data is stored in bad format.
  • 2492082 Freebusy publish to Public Folders fails with 8207 event.
  • 2666233 Manage hybrid configuration wizard won't accept domains starting with a numeral for FOPE outbound connector FQDN.
  • 2557323 "UseLocalReplicaForFreeBusy" functionality needed in Exchange 2010.
  • 2621266 Exchange 2010 Mailbox Databases not reclaiming space.
  • 2543850 Exchange 2010 GAL based Outlook rule not filtering emails correctly.

General Notes:

For DST Changes: http://www.microsoft.com/time.

Note for Forefront Protection for Exchange users: If you are running Forefront Protection for Exchange, be sure to perform these important steps from the command line in the Forefront directory before and after installing this rollup. If you skip them, the Exchange Information Store and Transport services will not start after you apply this update. Before installing the update, disable Forefront by running fscutility /disable. After installing the update, re-enable Forefront by running fscutility /enable.

Exchange Team

Geek Out With Perry on immutability of email data


As we’ve seen with previous episodes of Geek Out with Perry and on Perry Clarke’s blog, email archiving can be a heated and controversial topic. It’s one that people are very passionate about, including the folks on the Exchange team and Perry himself. We’ve already covered tiered storage and stubbing, as well as our archiving methodology, in previous blogs and videos, but Perry’s new post and video takes on another common question: “How does Exchange help me with immutability of my email data?” Read his blog and watch the video to see his take on what immutability is and how Exchange can help customers with their compliance requirements. For additional details on achieving immutability, you can also check out our immutability whitepaper.

We’ve also heard feedback recently that some of you would like alternate ways to view the Geek Out with Perry video series. Ask and ye shall receive! We now have two options for you to view Geek Out with Perry:

  • The Exchange YouTube channel, which features other awesome Exchange videos you should check out. To view the entire Geek Out with Perry playlist, click here.
  • The MSN Video catalogue which hosts all of the Exchange TechNet videos. To view the entire Geek Out with Perry playlist from that channel, click here.

We love geeking out on Exchange topics and want to hear your feedback and questions. Please let us know if you have other subjects you’d like to have Perry geek out on.

Cheers!

Ann Vu


Exchange 2010 SP2 RU1 and CAS-to-CAS Proxy Incompatibility


We wanted to give you a heads up regarding a change in CAS to CAS proxy behavior between servers running Exchange 2010 SP2 RU1 and servers running older versions of Exchange.

The SP2 RU1 package introduced a change to the user context cookie which is used in CAS-to-CAS proxying. An unfortunate side-effect is a temporary incompatibility between SP2 RU1 servers and servers running earlier versions of Exchange. The change is such that earlier versions of Exchange do not understand the newer cookie used by the SP2 RU1 server. As a result, proxying from SP2 RU1 to an earlier version of Exchange will fail with the following error:

Invalid user context cookie found in proxy response

The server might show exceptions in the event log, such as the following:

Event ID: 4999
Log Name: Application
Source: MSExchange Common
Task Category: General
Level: Error
Description: Watson report about to be sent for process id: 744, with parameters: E12, c-RTL-AMD64, 14.02.0283.003, OWA, M.E.Clients.Owa, M.E.C.O.C.ProxyUtilities.UpdateProxyUserContextIdFromResponse, M.E.C.O.Core.OwaAsyncOperationException, 413, 14.02.0283.003.

Not all customers are affected by this. But since we received a few questions about this, we wanted to let you know about the change. Many Exchange customers do not use proxying between Exchange 2010 and Exchange 2007 but rather use redirection, which is not affected by the change. However, if you are using CAS-to-CAS proxying, where an Exchange 2010 SP2 RU1 Client Access server is proxying to an earlier version of Exchange 2010 or Exchange 2007 Client Access server, then you are affected by the change.

If you are affected, it is important to note that this issue is temporary and will exist only until all of the CAS involved in the CAS-to-CAS proxy process are updated to Exchange 2010 SP2 RU1. Thus, if you are affected by this problem, simply deploy SP2 RU1 on the relevant Exchange 2010 servers and the issue no longer exists.

If you use CAS-to-CAS proxy between Exchange 2010 and Exchange 2007, we will have an interim update (IU) for Exchange 2007. Availability of the IU will be announced on this blog.

Server proxy version, server being proxied to, and action to take:

  • Exchange 2010 SP2 RU1 proxying to any version of Exchange 2010 older than SP2 RU1: Apply Exchange 2010 SP2 RU1 to all servers involved in the proxy process.
  • Exchange 2010 SP2 RU1 proxying to Exchange 2007: Hold off deployment of Exchange 2010 SP2 RU1 until you deploy the Exchange 2007 IU.

The Exchange Team

It Takes a Long Time…


Following our recent announcement of the release of Update Rollup 1 for Exchange 2010 Service Pack 2, you will see we released a ton of fixes. I wanted to blog about one specifically, and maybe at the same time provide some background into how issues like these come about and how we go about fixing them.

The specific fix is one cunningly referred to as 2556113, with the title, It takes a long time for a user to download an OAB in an Exchange Server 2010 organization.

With a title like that you might be thinking that we simply figured out a way to make OAB downloads ‘faster’. You might start thinking that we did that by just randomly deleting some of the users in the OAB, those you don’t know, the people working in accounting on the fourth floor, for example. Or perhaps we had tried to reduce the details we included in the OAB, perhaps by removing unnecessary information like family names, office locations or phone numbers. Or maybe we simply increased the speed of the Internet. Because that’s really easy.

Well, we didn’t do any of those (though we are looking into that whole Internet thing to see what we can do about it, as it sounds awesome); instead, we added some logic to ensure that Outlook tries to download the OAB from the CAS closest to itself.

“Why?” you ask. Well, it’s a good question and I reply with “As the KB article says, ‘Consider the following scenario….”

  • You have two Active Directory sites on a slow network in a Microsoft Exchange Server 2010 organization.
  • You have an Exchange Server 2010 Client Access server and an Exchange Server 2010 Mailbox server in one Active Directory site.
  • You have an Exchange Server 2010 Client Access server and add an Office Outlook user in the other Active Directory site.
  • The user whose mailbox is located in the different Active Directory site tries to download the Exchange Offline Address Book (OAB).

In this scenario, it takes a long time to download the OAB.

Well yes. No kidding. It really can. If you have a large OAB, it can really, really take a long time. But let’s expand on the scenario a little, as frankly there’s a bit of information I think you need to know, and having an AD site with nothing but a CAS in it doesn’t seem like a very smart move to most people.

So consider this more detailed scenario instead;

  • You have a centralized deployment. All mailboxes are in one central location.
  • You have lots of small locations where people touch down and work.
  • These locations are connected to the central site with poor networks. Satellite, ISDN, PSTN, tropospheric scatter (I had a customer with one of these once. Brilliant. Until there was a storm), wet piece of string, etc.
  • Your OAB is big. It is large. It is not small. Take your pick of the definition you like best. Suffice to say, it’s of significant size that you care.
  • Your Outlook client tries to download the OAB, and it comes from the central datacenter. So does the Outlook client being used by the person sitting next to you, and the funny looking guy over there in the corner too. All of you are downloading the same OAB. Over the same wet piece of string. It’s getting very slow.

With luck you can see that you are all competing for the same bandwidth, while also trying to work, and even though the BITS client technology used for OAB downloads is good, it’s not really going to help you much.

So you add a CAS to each remote location, just as the diagram in http://technet.microsoft.com/en-us/library/bb232155.aspx suggests. The idea is that the client computer will download the OAB it needs from the local CAS. It might sound like a great idea, but that’s not how Exchange has ever worked. Prior to 2010 SP2 RU1, that is…

How did it work then? And why am I telling you that TechNet lied to you?

Well to answer the first question, the URL the client uses to download the OAB from is provided to the client by the AutoDiscover service. And the AutoDiscover code has always picked a URL for the OAB you should be downloading from the AD site that your mailbox is in, not the AD site your client computer is in.

To answer the second of those questions, you need to first understand that TechNet is never wrong (my friends in UE, like Scott Schnoll, get real touchy if you imply their articles are incorrect). It’s just that sometimes it isn’t right, from a certain point of view. TechNet details this behavior because it was part of the original PM specification back when 2007 was being designed. I probably shouldn’t have told you that, but heck, it was. And it didn’t get done. These things happen, you know, in a software product with over 20 million lines of code where stuff changes all the time. TechNet doesn’t usually lie. Well, not much.

Back to how it works. Just think about it for a moment. You have a 1 GB OAB. And you add a replica of that OAB to a CAS in the remote and distant AD site, where the users are. However, they never use it. (Ok, unless their mailboxes are also in the same AD site, but that’s not the scenario, is it?) That kind of sucks, doesn’t it? Yes, it does, I hear you say. It looks a bit like this diagram.

image

Outlook uses the CAS closest to the client computer for the client’s AutoDiscover requests (well, it should, and we’ll come back to that in a moment) but the OAB URL it hands back is for the CAS in the same AD site as the mailbox. So even though we are replicating the OAB to AD Site B, the client pulls the OAB from AD Site A.

So, a large customer with lots of small sites and a whopping OAB tells us this won’t work and downloads are killing whatever WAN bandwidth they have. So, what can we do about this? It turns out there are a few ways to solve this, and I have to add that this is one of the fun bits of my job, trying to figure this kind of thing out. It’s a nerd thing.

  1. They could reduce the size of their OAB, speed up their WAN, move the remote offices closer etc. None of these will fly for them as a solution. Though we did ask.
  2. We could create lots of OABs that have the same content. And specify on a per-user, or per-database level the OAB the user should download. And then we only have that OAB available in the remote location. Therefore AutoDiscover will provide the only URL it can for it, in the remote location. Now this sounds good, except the users move from site to site. And a download then would mean a double slow network hop. Ouch. Scratch that.
  3. Same thing with mailboxes – move the mailboxes to the remote locations… well, they move around plus that would really complicate administration and High Availability and consequently increase cost.
  4. We could do some kind of reverse IP address to AD site mapping thing. Now I believe this was the original way we had planned to solve this, and it’s actually kind of hard. It’s hard because you need to ensure all subnets a client could come from are in AD Sites and Services, and then try and reverse engineer the AD site the user is in, and then look at site link costs and …you get the idea I hope. It’s complex, and defeated by NAT, or if the admin doesn’t list every possible subnet in AD Sites and Services.
  5. We could ‘interfere’ with DNS or the AutoDiscover XML to try and make the client think it is talking to the centralized location but in fact be talking to a local IIS instance. Again, it’s hard, tricky to implement and support and just plain ugly if you’re asking.
  6. Something else. I picked this one, as the others seemed really hard.

So cast your mind back just a few short paragraphs to the sentence that stated “Outlook uses the CAS closest to the client computer for the client’s AutoDiscover requests”, the one that I said I would come back to. Well, it is worth returning to because of something called AutoDiscoverServiceSiteScope.

AutoDiscoverServiceSiteScope is a CAS setting that helps the Outlook client map AD sites to CAS for the purposes of finding the closest CAS to the client for AutoDiscover requests. The client does this by seeking out Service Connection Points (SCPs), which are in fact pointers to the AutoDiscover service.

Here’s how it works. When an Outlook client starts up he heads off to the triangle, sometimes and otherwise known as ‘AD’, and looks for all the SCPs put there by Exchange setup. He finds a bunch (we hope), and on each is an attribute, the Keywords attribute, which is set/changed/sometimes messed up by the use of Set-ClientAccessServer –AutoDiscoverServiceSiteScope: ADSiteNameA, ADSiteNameB, etc. The Keywords attribute is used to specify which AD sites this CAS is responsible for, for AutoDiscover requests.

When the Outlook client finds more than one SCP he builds himself a list of usable SCPs by comparing the value stored in the Keywords attribute with his own AD site (which is dynamically updated by the local Netlogon service when he starts up or changes IP address).

He then builds one list. Either all those that match his AD site (where Keywords attribute = client AD Site) or, if there are none, he puts every SCP in the list. These are the servers he can use for his AutoDiscover requests.

He then starts at the top of the list (which is always in the same order by the way, by date of install) and tries to connect to the URI contained within the ServiceBindingInformation attribute – which is the location of the AutoDiscover service itself. He then posts XML, gets a response etc., and then lives happily ever after. More details for all this good AutoDiscover stuff can be found here.
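The selection logic above can be sketched roughly as follows (an illustrative sketch only, not actual Outlook code; the field names here are simplified stand-ins for the real AD attributes):

```python
# Illustrative sketch of Outlook's SCP selection (not actual Outlook code).
# Each record stands in for an SCP: "keywords" for the Keywords attribute,
# "uri" for ServiceBindingInformation, "install_date" for the ordering key.
def order_scps(scps, client_site):
    # Prefer SCPs whose Keywords attribute lists the client's AD site...
    in_scope = [s for s in scps if client_site in s["keywords"]]
    # ...otherwise fall back to every SCP found in AD.
    candidates = in_scope if in_scope else list(scps)
    # Outlook then walks the list in a fixed order (by date of install).
    return sorted(candidates, key=lambda s: s["install_date"])

scps = [
    {"uri": "https://cas-a/autodiscover", "keywords": ["SiteA"], "install_date": 1},
    {"uri": "https://cas-b/autodiscover", "keywords": ["SiteB"], "install_date": 2},
]
print(order_scps(scps, "SiteB")[0]["uri"])  # https://cas-b/autodiscover
```

Note how a client in an unscoped site (one listed in no SCP's Keywords) falls back to trying every SCP, which is why setting the site scopes correctly matters.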

Why is this interesting? Well this AutoDiscoverServiceSiteScope thing helps Outlook find the CAS closest to the client’s location, assuming the admin has set up the site scopes correctly (and we do tell admins how to do that). So we really don’t need to figure out which CAS is closest to the client once we get the request, as that has already happened by the time the request reaches CAS.

Once that request hits CAS we figure out the settings to return to the client – but then we always forget one thing – that the OAB the user needs, could be local to the CAS we are executing the request on, and instead, we always gave the user a URL from a CAS way, way, over there. And that’s what we needed to fix.

The solution for this is therefore theoretically very simple and it means we don’t have to invent a new way to figure out the closest CAS to the client, as we already have one which works quite well thank you very much.

If we were to make the assumption that the admin has set up AutoDiscoverServiceSiteScope correctly, the CAS the client connects to for AutoDiscover will be the CAS closest to the client. If this assumption holds true, the CAS, when figuring out what to return in the AutoDiscover XML, needs to simply check whether he himself has a copy of the OAB the user should be using, and if so, he simply provides his own OAB URL, not the URL of a CAS in the AD site where the user’s mailbox is located. Of course, if he doesn’t have a copy of the OAB the user needs, the old behavior should prevail, meaning the CAS will return the OAB URL of a CAS in the mailbox AD site.
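Conceptually, the RU1 change amounts to a decision like this (a hedged sketch of the logic, not the actual Exchange source; all names here are made up for illustration):

```python
# Sketch of the SP2 RU1 OAB URL decision (illustrative only, not Exchange code).
def oab_url_for(user_oab, local_oab_copies, local_url, mailbox_site_url):
    # If this CAS (the one answering AutoDiscover, presumed closest to the
    # client) holds a replica of the user's OAB, hand back its own URL.
    if user_oab in local_oab_copies:
        return local_url
    # Otherwise keep the pre-RU1 behavior: a URL from the mailbox AD site.
    return mailbox_site_url

print(oab_url_for("Default OAB", {"Default OAB"},
                  "https://cas-b/OAB", "https://cas-a/OAB"))  # https://cas-b/OAB
```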

So basically the picture changes to look like this;

image

Now that’s much friendlier to the WAN isn’t it? One copy replicates over the WAN and all clients in that location will now get the OAB from the CAS local to them.

What do you have to do to get this new behavior to kick in? Just two things. Deploy SP2 RU1 on the CAS, and ensure that your AutoDiscoverServiceSiteScope parameters are set up correctly.

I hope you find this useful, and may your WAN forever be a long fat pipe.

Greg Taylor
Principal Program Manager
Exchange Customer Experience

CalCheck - The Outlook Calendar Checking Tool


Over the past year or so I have been working on this tool, adding functionality and checks based on my experience as an Outlook engineer and on suggestions from other engineers. This February the tool was released so that all our customers can download it and use it to check for potential problems in their calendars, which will hopefully be a real time saver when you encounter a problem with your Outlook Calendar, or with a user’s Outlook Calendar in your organization.

Installation

Download CalCheck from the Microsoft Download Center.

This utility works with:

  • Microsoft Office Outlook 2003
  • Microsoft Office Outlook 2007
  • Microsoft Office Outlook 2010 (32-bit)
  • Microsoft Office Outlook 2010 (64-bit)
  • Microsoft Exchange Server 2003
  • Microsoft Exchange Server 2007
  • Microsoft Exchange Server 2010

Note: The 64-bit version of this tool is only for use with the Microsoft Outlook 2010 64-bit version.

The download is a ZIP file - just unzip it in an empty directory, open a command window in that directory, and run it.

What CalCheck does

The Calendar Checking Tool for Outlook (CalCheck) is a command-line program that checks Microsoft Outlook Calendars for problems. The tool opens an Outlook profile to access the Outlook Calendar. It performs various checks, such as permissions, free/busy publishing, delegate configuration, and automatic booking. Then each item in the calendar folder is checked for known problems that can cause unexpected behavior, such as meetings that appear to be missing.

As CalCheck goes through this process, it generates a report that can be used to help diagnose problem items or identify trends.

Checks performed

The following Calendar-specific checks are performed and logged in the report:

  • Permissions on the Calendar
  • Delegates on the Calendar
  • Free/Busy publishing information
  • Direct Booking settings for the Mailbox or Calendar
  • Total number of items in the Calendar folder

The following item-level checks are performed and logged in the report:

  • No Organizer email address
  • No Sender email address
  • No dispidRecurring property (causes an item to not show in the Day/Week/Month view)
  • Existence of the dispidApptStartWhole and dispidApptEndWhole properties
  • No Subject for meetings that occur in the future or for recurring meetings (a warning is logged)
  • Message Class check (a warning is logged)
  • dispidApptRecur (the recurrence blob) is checked for valid overall start and end times, but not for exceptions
  • Check for Conflict items in the Calendar
  • Check for duplicate items, based on certain MAPI properties
  • Check whether there are more than 1250 recurring meetings (a warning is logged) or more than 1300 recurring meetings (an error is reported); 1300 is the limit
  • Check if you are an attendee and you became the Organizer of a meeting
  • Check meeting exception data to ensure it is the correct size

Server Mode

You also have the option to run CalCheck in Server Mode. In Server Mode, CalCheck attempts to open all mailboxes on the Exchange server and perform the checks listed in the "Checks Performed" section of this article. Server Mode generates a CalCheckSvr.log file, which lists the mailboxes that have errors. Additionally, CalCheck generates a separate CalCheck__.log file for each mailbox. This log file shows more mailbox-specific detail.

To use Server Mode, you must use a messaging profile associated with an account that has permissions to all of the mailboxes on the specified Exchange server. To run server mode, use the “-S” command-line switch.

Example

Running to check a single mailbox/calendar:

image

If you don’t specify a profile on the command line, you will be prompted to choose one, as in the above screenshot.

Once you have chosen your profile - the tool will run - and you will see similar output as long as everything is successful:

image

Looking at this window shows you that there is a CalCheck.log, and where to go and find it. Opening that will show some info like the following:

02/17/2012 05:09:20PM Calendar Checking Tool - Version 1.0
02/17/2012 05:09:20PM ====================================
02/17/2012 05:13:45PM Opening mailbox: Mailbox
02/17/2012 05:13:45PM /O=Org/OU=OU/cn=Recipients/cn=Mailbox
02/17/2012 05:13:45PM Local time zone: Eastern Standard Time
02/17/2012 05:13:45PM Successfully opened the Calendar folder.
02/17/2012 05:13:45PM Processing calendar for Mailbox
02/17/2012 05:13:46PM Successfully located and opened the local free busy message for this mailbox.
02/17/2012 05:13:47PM Publishing 2 month(s) of free/busy data on the server.
02/17/2012 05:13:47PM Resource Scheduling / Automatically accept meeting requests is disabled.
02/17/2012 05:13:47PM ====================================
02/17/2012 05:13:47PM Delegates for this mailbox:
02/17/2012 05:13:47PM ===========================
02/17/2012 05:13:47PM No delegates are set.
02/17/2012 05:13:47PM ===========================
02/17/2012 05:13:47PM Permissions on this Calendar:
02/17/2012 05:13:47PM =============================
02/17/2012 05:13:47PM Default: None
02/17/2012 05:13:47PM Manager: Reviewer
02/17/2012 05:13:47PM Coworker1: None
02/17/2012 05:13:47PM Coworker2: Reviewer
02/17/2012 05:13:47PM Coworker3: Reviewer
02/17/2012 05:13:47PM =============================
02/17/2012 05:13:48PM Found 1404 items in the Calendar. Processing...
02/17/2012 05:13:48PM WARNING: No Subject on this item. You may want to add a Subject to this item.
02/17/2012 05:13:48PM Properties to help investigate this reported item:
02/17/2012 05:13:48PM Subject:
Location: No subject on recurring item
Start Time: 01/11/2011 10:00:00PM
End Time: 01/11/2011 10:30:00PM
Last Modifier: Mailbox
Last Modified Time: 02/04/2011 02:48:08PM
Is a recurring appointment: true
Sender Name: Mailbox
Sender Address: /o=Org/ou=OU/cn=recipients/cn=Mailbox
Organizer Name: Mailbox
Organizer Address: /o=Org/ou=OU/cn=recipients/cn=Mailbox
Recurrence Start: 12:00:00.000 AM 1/11/2011
Recurrence End: 12:00:00.000 AM 2/1/2011
Recurrence End Type: End After X Occurrences
Number of Exceptions: 0x0000
 
02/17/2012 05:13:50PM ERROR: Detected a duplicate item in the Calendar. Please check this item.
02/17/2012 05:13:50PM Properties to help investigate this reported item:
02/17/2012 05:13:50PM Subject: Doctor appointment
Location: Doctor’s Office
Start Time: 03/04/2012 04:30:00PM
End Time: 03/04/2012 06:00:00PM
Last Modifier: Mailbox
Last Modified Time: 08/01/2011 06:29:05PM
Is a recurring appointment: false
Sender Name: Mailbox
Sender Address: /o=Org/ou=OU/cn=recipients/cn=Mailbox
Organizer Name: Mailbox
Organizer Address: /o=Org/ou=OU/cn=recipients/cn=Mailbox

For problem items that are found, the report gives you information you can use to go and find them so you can remove them, recreate them, or, if possible, fix them.

Command Switches - and what they do

CalCheck [-P <profile name>] [-M <mailbox DN>] [-S <server name>] [-A] [-F] [-R] [-V] [-No <test>]
CalCheck -?
 
-P Profile name (If this parameter is not specified, the tool prompts you for a profile)
-M Mailbox DN (If this parameter is specified, only process the mailbox that is specified)
-S Server name (Process the complete server unless a mailbox is specified)
-A All calendar items are output to CALCHECK.CSV
-F Create a CalCheck folder, and move flagged error items to the folder
-R Put a Report message that contains the CalCheck.log file in the Inbox
-V Verbose output to the Command Prompt window
-No To omit a calendar item test
The No parameter works with "org" to omit the “Attendee becomes Organizer” test and works with "dup" to omit duplicate item detection
-? Print this message

Some additional tips about specific switches:

“-M” You must use the legacyExchangeDN for the mailbox, and the profile you use must be for a mailbox that has permission to open that other mailbox.

“-A” Will create a CSV file that includes all calendar items - one in each row. There will be several properties listed for each item that can be used to look for problems not detected by the tool:

image

You can view all items in the Calendar by opening the CSV in Excel. You can sort and filter items based on things like start time, subject, recurring items, etc. This can be useful for finding problems that can’t be detected by CalCheck, or that currently aren’t looked for by CalCheck. If you find a problem item in the CSV, you can open the Calendar and put it into Category view to get a similar view of the Calendar in Outlook.
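If Excel isn't handy, the same kind of filtering can be scripted. The sketch below is purely illustrative: the column names "Subject" and "Recurring" are assumptions, so check the header row of your own CALCHECK.CSV for the actual names.

```python
# Filtering CalCheck's -A CSV output outside Excel (illustrative; the
# "Recurring" column name is an assumption -- inspect your own CALCHECK.CSV
# header row for the real column names).
import csv
import io

def recurring_items(csv_text):
    # Accepts the CSV content as a string, e.g. open("CALCHECK.CSV").read().
    return [row for row in csv.DictReader(io.StringIO(csv_text))
            if row.get("Recurring", "").lower() == "true"]

sample = "Subject,Recurring\nStaff meeting,true\nDentist,false\n"
print(len(recurring_items(sample)))  # 1
```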

To do this, in Outlook click the View tab, click the Change View drop down, and choose By Category. This will give a view of the Calendar like the following:

image

This view shows all the items in the Calendar as a list - similar to looking at emails in the Inbox folder. You can sort on things here like Subject, Location, Start, and End. This can be used to find the problem item in the Calendar folder when it is difficult or impossible to find in the normal Calendar view.

“-F” Will create a CalCheck folder in your folder list, and will move items marked as an Error to that folder:

image

Items can easily be moved back to the Calendar, or can be deleted from here if not needed, or corrected if possible and then placed back in the Calendar. The general rule of thumb would be to recreate the item and delete the item that was moved out to the CalCheck folder.

“-R” Will create a mail message in the Inbox folder with the CalCheck.log file attached to it. This is useful when running the tool in Server mode - as each user will get their report in their Inbox:

image

“-No” There are two of these: “-No org” and “-No dup”:

“-No org” will omit the “attendee becomes the organizer of the meeting” check. Part of this check uses the legacyExchangeDN of the mailbox. If the legacyExchangeDN has changed for any reason, like a migration, then this test will report errors for items that may not really be in error. The error that is logged by CalCheck will show both DNs. Here is an example:

12/21/2011 05:27:25PM ERROR: dispidApptStateFlags is 1, but the address for this mailbox does not match the organizer address.
12/21/2011 05:27:25PM Check to ensure the Organizer Address is correct, and whether or not this user should be the organizer.
12/21/2011 05:27:25PM Organizer Address: /o=Org1/ou=admin group 1/cn=recipients/cn=user1
12/21/2011 05:27:25PM DN for this user: /o=Org2/ou=admin group 2/cn=recipients/cn=user1
12/21/2011 05:27:25PM See KB 2563324 for additional information: http://support.microsoft.com/default.aspx?scid=kb;EN-US;2563324
12/21/2011 05:27:25PM Properties to help investigate this reported item:
12/21/2011 05:27:25PM Subject: Test

The mailbox here is the same actual mailbox - but because the legacyExchangeDN changed - it is marked as an error.

“-No dup” will omit the duplicate item detection. This test builds an in-memory list of items and checks each item against that list, which can slow the process down a bit due to the extra processing and memory usage.
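That trade-off can be sketched roughly like this (an assumed implementation for illustration only; the actual MAPI properties CalCheck compares are not listed here, so the fields below are stand-ins):

```python
# Illustrative duplicate detection via an in-memory set of item keys.
# The real tool compares certain MAPI properties; these fields are stand-ins.
def find_duplicates(items):
    seen, dups = set(), []
    for item in items:
        key = (item["subject"], item["start"], item["end"], item["organizer"])
        if key in seen:       # an item with identical key properties was seen
            dups.append(item)
        else:
            seen.add(key)     # memory cost grows with the size of the calendar
    return dups

items = [
    {"subject": "Doctor appointment", "start": "03/04 16:30",
     "end": "03/04 18:00", "organizer": "Mailbox"},
    {"subject": "Doctor appointment", "start": "03/04 16:30",
     "end": "03/04 18:00", "organizer": "Mailbox"},
]
print(len(find_duplicates(items)))  # 1
```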

What CalCheck does not do

  • CalCheck is a reporting tool only. It will not automatically modify or “fix” any items. It will move items detected as error items to the CalCheck folder if the “-F” switch is used, but otherwise no changes will be made to any items.
  • CalCheck only works against Calendars located on an Exchange server. It will not work against mailboxes on other server types, such as IMAP or POP3 servers.
  • CalCheck can’t find every kind of corruption that can possibly happen to a Calendar item. However - it can find many known problems that can be knocked out without having to spend time combing through a Calendar and/or contacting a help desk.

Feedback

Please leave feedback! The best avenue for that is on http://calcheck.codeplex.com/discussions

If you have a problem with CalCheck - you can post information about it on http://calcheck.codeplex.com/workitem/list/basic

Thanks - and I hope this will help save time in diagnosing and resolving calendar issues for you!

Randy Topken
Senior Escalation Engineer
Outlook team

ESE Access to Exchange: Spamcops!


It’s not easy being a spam cop. But the folks on the Forefront Online Protection for Exchange (FOPE) team love it!

Their passion for investigation and transport expertise has translated into measurable impact, helping safeguard customers’ inbound, outbound and internal business mail from spam, viruses, phishing attacks and out-of-policy content, and helping customers focus on being productive. FOPE processes over a billion messages worldwide every day, including for Exchange Online customers. The team works hard to offer five financially backed SLAs, including 100% protection from known viruses, 98% antispam effectiveness and 99.999% uptime.

This service goes largely unnoticed as long as the mail keeps flowing, and the FOPE team works hard to help ensure that. One of our Senior Program Managers, Alexander Nikolayev, recounted that the team proactively detected an organized attack mounted during the US Thanksgiving holiday and countered the malicious behavior while many Americans were eating their turkey dinners. When customers went back to work the next day, they didn’t notice anything other than normal email in their inboxes.

We wanted to share stories like these and some of the team’s passions with all of you in the next installment of our ESE Access to Exchange video series.

In the video, Terry Zink discusses taking a statistical approach to designing our IP block lists, and we wanted to elaborate on that approach here. FOPE offers an effective combination of anti-malware and antispam technologies, including heuristic scanning and block lists, to protect organizations from both known and unknown malicious software and to achieve and maintain the 98% SLA. Some of our IP block lists are built to act proactively: they examine traffic history and make judgments about whether communication is coming from a good sender or a bad one. Additionally, as patterns emerge showing which senders are responsible for a high volume of illegitimate messages, FOPE automatically adds the sending IP address(es) to the reputation block list so that future messages from those IPs are no longer accepted by the service’s global network.
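The volume-based part of that pattern can be sketched like this (a simplified illustration with made-up thresholds; FOPE's actual scoring and reputation algorithms are far more sophisticated and are not described here):

```python
# Simplified illustration of volume-based IP reputation (made-up threshold
# and sample size; not FOPE's actual algorithm or parameters).
SPAM_RATIO_THRESHOLD = 0.90   # hypothetical cut-off for "mostly illegitimate"
MIN_SAMPLE = 1000             # don't judge an IP on too little traffic

def update_block_list(traffic_history, block_list):
    for ip, stats in traffic_history.items():
        total = stats["spam"] + stats["legit"]
        if total >= MIN_SAMPLE and stats["spam"] / total >= SPAM_RATIO_THRESHOLD:
            block_list.add(ip)   # future mail from this IP is refused
    return block_list

history = {"203.0.113.9": {"spam": 4800, "legit": 200},
           "198.51.100.7": {"spam": 10, "legit": 990}}
print(sorted(update_block_list(history, set())))  # ['203.0.113.9']
```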

We unfortunately had to cut out a lot of content to keep the running time manageable (including Alex’s Thanksgiving spam attack story and Terry’s detailed explanation on IP block lists and reputation) but let us know if you have questions and feedback for us and if you’d like to hear more about the Exchange team.

If you missed our previous video in the series, check out ESE Access to Exchange: Running Exchange Online.

Ann Vu

Note, the "ESE" in the post title is a wordplay on "easy". The post content does not have anything to do with the excellent Extensible Storage Engine (ESE) used in Microsoft Exchange and other products.

Released: Outlook Configuration Analyzer Tool (OCAT)

Last month we released the Outlook Configuration Analyzer Tool (OCAT) on the Microsoft Download Center site.

OCAT was developed by two Microsoft support engineers with over 30 years of combined experience in Outlook, Exchange and Office support. Based on their support experience, they compiled a set of detection rules that look for Outlook configurations that have historically been potential sources of problems in Outlook. The tool looks and feels like Microsoft Exchange Best Practices Analyzer (ExBPA) - the same infrastructure used by ExBPA was chosen for the development and final implementation of OCAT.

Figure 1: Microsoft Outlook Configuration Analyzer Tool (OCAT)

You can use OCAT to check Outlook configuration on your users' computers and look for known issues (for example, a PST file located on a network share). We recommend running it if you suspect a user's Outlook profile or configuration to be a part of the problem. You can also run the tool proactively to detect Outlook configuration issues. The tool allows you to:

  • Run a scan on your computer
  • Open a previously run scan on your computer
  • Import a scan from another computer
  • Use several reporting formats to view the scan results
  • Start the Exchange Remote Connectivity Analyzer tool
  • Send feedback to the OCAT team

We're working on an updated version of OCAT that includes new functionality such as automatic downloading of new detection rules, scanning calendar items (using code from the new CalCheck tool) and offline scanning for Outlook 2003 clients. Since OCAT utilizes MrMapi to collect a few configuration settings, we are also working with its developer (another Microsoft support engineer) to improve data collection capabilities in OCAT.

You can follow the OCAT team on Twitter to receive news of OCAT updates.

System requirements

Before you install OCAT, make sure that your computer meets the following OCAT system requirements:

  • Supported operating systems:
    • Windows 7
    • Windows Vista Service Pack 2
    • Windows XP Service Pack 3
  • OCAT requires Microsoft Outlook. The following versions of Outlook are supported:
    • Microsoft Office Outlook 2007
    • Microsoft Outlook 2010 (32-bit or 64-bit)
  • Microsoft .NET Framework Version 2.0 or higher
  • .NET Programmability Support (as part of your Microsoft Office installation)

Note: OCAT does not support Outlook 2003. If you try to perform a scan on a client that has Outlook 2003 installed, you receive the following error message:

Error starting scan, please try again. If error persists, please send mail to ocatsupp @ microsoft DOT com.

You can also download a complete OCAT user guide from the download page. We highly recommend that you read this document before installing and using OCAT. See OCAT Supplemental Information.

OCAT Functionality overview

Here's an overview of the functionality provided by OCAT.

Generating an OCAT scan report

To generate an OCAT report for your Outlook profile, simply click Start a scan in the left panel.

Make sure that Outlook is running before you start an OCAT scan.

Figure 2: Starting an OCAT scan

If you can't keep Outlook running long enough to start an OCAT scan, you can still perform a basic scan. To do this, in the Task drop-down list, select Offline Scan and then click Start scanning.

Figure 3: Starting an offline scan

The report that an offline scan generates contains only information that's available on your computer, such as registry data, Application event log details, a list of installed updates and local file details. Although an offline scan doesn't contain as many profile details as an online scan, it may still provide enough information to help you resolve any problems that you are experiencing with Outlook.

Viewing your scan report

The report that OCAT generates can, in most cases, provide a lot of information about your Outlook profile and show you known problems in your profile with links to relevant Knowledge Base articles.

  • List Reports

    The List Reports view is the default presentation of your scan data.


    In the List Reports view, there are up to three tabs that are available to view different snapshots of this data: 1) Informational Items 2) All Issues and 3) Critical Issues

  • Tree Reports

    The Tree Reports view of your scan report provides tree-control functionality to view your scan results.


    In the tree report view, two tabs are available to view different snapshots of this data: 1) Detailed View and 2) Summary View

How to view a report that was created on another computer

You can view an OCAT scan report generated on another computer.

  1. Start OCAT on the user's machine.
  2. In the left panel, click Select a Configuration scan to view and then select the scan you want to view from the list of available scans.
  3. Click Export this scan.
  4. In the Export this scan dialog box, specify a file name and a folder location.
  5. Copy the XML file that you saved in step 4 to the computer from which you want to view the report.
  6. On the computer to which you copied the file in step 5, start OCAT.
  7. On the Welcome page, click Select a Configuration scan to view.
  8. On the Select a Configuration scan to view page, click Import scan.
  9. Browse to the folder that contains the XML file that you copied in step 5, and then click Open.

The scan is opened automatically for viewing.

Send us your feedback

If you want to submit feedback or improvement suggestions for OCAT, click the feedback link in the See also section in the left panel of OCAT. The link opens a new email message addressed to OCATsupp.

Greg Mansius

MEC is Back!

In the late 90’s and first years of the 21st century, our team along with many of you were part of one of the most valuable technical education and community events in the industry. This event, focused entirely on Microsoft Exchange Server, brought together thousands of Exchange administrators, architects, consultants and partners with an abundance of the Exchange product group itself, hunkered down in a conference center to do nothing but soak in the goodness of Exchange.

Together, we shared deep insight about the latest product details and received a tailored education that helped all of you in the community move your infrastructures forward successfully and helped us on the Exchange team build a better product. Along the way, we had a pretty great time together, got to know each other and returned home better for the experience.

After a mysterious ten-year hiatus, filled with spirited requests from the community at large, MEC IS BACK!

The premier event for deeply technical information on all things both Exchange Server and now Exchange Online and the best place to engage directly with the Exchange product group and your peers in the community is returning in 2012.

Visit MECisback.com today and in the coming weeks and months to get informed and stay informed about the details of how this conference will make its return. I will be back on EHLO periodically to tell you more. It is going to be epic!

Michael Atalla
Director, Exchange Product Management

Introducing: Log Parser Studio

Anyone who regularly uses Log Parser 2.2 knows just how useful and powerful it can be for obtaining valuable information from IIS (Internet Information Server) and other logs. In addition, adding the power of SQL allows explicit searching of gigabytes of logs returning only the data that is needed while filtering out the noise. The only thing missing is a great graphical user interface (GUI) to function as a front-end to Log Parser and a ‘Query Library’ in order to manage all those great queries and scripts that one builds up over time.

Log Parser Studio was created to fulfill this need by allowing those who use Log Parser 2.2 (and even those who have avoided it for lack of an interface) to work faster and more efficiently, getting to the data they need with less fiddling with scripts and folders full of queries.

With Log Parser Studio (LPS for short) we can house all of our queries in a central location. We can edit and create new queries in the ‘Query Editor’ and save them for later. We can search for queries using free text search as well as export and import both libraries and queries in different formats allowing for easy collaboration as well as storing multiple types of separate libraries for different protocols.

Processing Logs for Exchange Protocols

We all know this very well: processing logs for different Exchange protocols is a time-consuming task. In the absence of special-purpose tools, sifting through those logs and processing them with Log Parser (or some other tool) is tedious work for an Exchange administrator, especially if output format matters, and it requires expertise in writing SQL queries. You can also use special-purpose scripts found on the web and then analyze their output to make sense of those lengthy logs. Log Parser Studio is designed primarily for quick and easy processing of different logs for Exchange protocols. Once you launch it, you’ll notice tabs for different Exchange protocols, such as Microsoft Exchange ActiveSync (MAS), Exchange Web Services (EWS), Outlook Web App (OWA/HTTP) and others. Under those tabs are dozens of SQL queries written for specific purposes (a query's description and other particulars are also available in the main UI), which can be run with a single click!
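To give a feel for the kind of question these canned queries answer, here is a rough Python equivalent of a "top ActiveSync devices" query over W3C-format IIS log lines. The field positions and the DeviceId query-string parsing below assume a simplified log layout for illustration; LPS itself hands the real work to Log Parser 2.2:

```python
from collections import Counter
from urllib.parse import parse_qs

def top_eas_devices(log_lines, top_n=5):
    """Count Exchange ActiveSync hits per DeviceId from simplified
    W3C IIS log lines (assumed layout:
    date time cs-method cs-uri-stem cs-uri-query)."""
    hits = Counter()
    for line in log_lines:
        if line.startswith("#"):   # skip W3C header directives
            continue
        fields = line.split()
        if len(fields) < 5 or "/Microsoft-Server-ActiveSync" not in fields[3]:
            continue
        # DeviceId is carried in the EAS request query string
        query = parse_qs(fields[4])
        for device_id in query.get("DeviceId", []):
            hits[device_id] += 1
    return hits.most_common(top_n)

# Usage with a few hypothetical log lines
logs = [
    "#Fields: date time cs-method cs-uri-stem cs-uri-query",
    "2012-03-01 10:00:00 POST /Microsoft-Server-ActiveSync Cmd=Sync&DeviceId=ABC123",
    "2012-03-01 10:00:30 POST /Microsoft-Server-ActiveSync Cmd=Sync&DeviceId=ABC123",
    "2012-03-01 10:01:00 GET /owa -",
]
top = top_eas_devices(logs)  # → [("ABC123", 2)]
```

LPS's bundled queries express the same idea in Log Parser SQL and scale it to gigabytes of logs.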

Let’s get into the specifics of some of the cool features of Log Parser Studio

Query Library and Management

Upon launching LPS, the first thing you will see is the Query Library preloaded with queries. This is where we manage all of our queries. The library is always available by clicking on the Library tab. You can load a query for review or execution using several methods. The easiest method is to simply select the query in the list and double-click it. Upon doing so the query will auto-open in its own Query tab. The Query Library is home base for queries. All queries maintained by LPS are stored in this library. There are easy controls to quickly locate desired queries & mark them as favorites for quick access later.


Library Recovery

The initial library that ships with LPS is embedded in the application and created upon install. If you ever delete, corrupt or lose the library you can easily reset back to the original by using the recover library feature (Options | Recover Library). When recovering the library all existing queries will be deleted. If you have custom/modified queries that you do not want to lose, you should export those first, then after recovering the default set of queries, you can merge them back into LPS.

Import/Export

Depending on your needs, the entire library or subsets of it can be imported and exported, either in the default LPS XML format or as SQL queries. For example, if you have a folder full of Log Parser SQL queries, you can import some or all of them into LPS’s library. Usually, the only thing you will need to do after the import is make a few adjustments. All LPS needs is the base SQL query, with the filename references swapped out for ‘[LOGFILEPATH]’ and/or ‘[OUTFILEPATH]’, as discussed in detail in the PDF manual included with the tool (you can access it via LPS | Help | Documentation).
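The substitution itself is simple. As a sketch in Python (the exact quoting of the tokens inside the template is an assumption here; see the LPS manual for the authoritative format):

```python
def prepare_query(sql_template, log_path, out_path=None):
    """Swap the LPS placeholder tokens for concrete file paths,
    mirroring the '[LOGFILEPATH]'/'[OUTFILEPATH]' substitution
    described in the LPS manual."""
    query = sql_template.replace("'[LOGFILEPATH]'", f"'{log_path}'")
    if out_path is not None:
        query = query.replace("'[OUTFILEPATH]'", f"'{out_path}'")
    return query

# Usage with a hypothetical imported query
template = "SELECT cs-uri-stem, COUNT(*) FROM '[LOGFILEPATH]' GROUP BY cs-uri-stem"
prepared = prepare_query(template, r"C:\Logs\u_ex120301.log")
# The token is gone and the concrete log path is in its place
```

LPS performs this swap for you at execution time; the point of the tokens is that the stored query never hard-codes a path.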

Queries

Remember that a well-written, structured query makes all the difference between a successful query that returns the concise information you need and a subpar query that taxes your system, returns much more information than you actually need and, in some cases, crashes the application.


The art of creating great SQL/Log Parser queries is outside the scope of this post, however all of the queries included with LPS have been written to achieve the most concise results while returning the fewest records. Knowing what you want and how to get it with the least number of rows returned is the key!

Batch Jobs and Multithreading

You’ll find that LPS in combination with Log Parser 2.2 is a very powerful tool. However, if all you could do was run a single query at a time and wait for the results, you wouldn’t make nearly as much progress as you could. To address this, LPS provides both batch jobs and multithreaded queries.

A batch job is simply a collection of predefined queries that can all be executed with the press of a single button. From within the Batch Manager you can remove any single query, or all of them, as well as execute them by clicking the Run Multiple Queries button or the Execute button. Upon execution, LPS prepares each query in the batch and, by default, sends ALL queries to Log Parser 2.2 as soon as each is prepared. This is where multithreading works in our favor. For example, if we have 50 queries set up as a batch job and execute the job, we’ll have 50 threads in the background all working with Log Parser simultaneously, leaving the user free to work with other queries. As each job finishes, the results are passed back to the grid or to CSV output, based on the query type. Even in this scenario you can continue to work with other queries: search, modify and execute. As each query completes, its thread is retired and its resources are freed. These threads are managed efficiently in the background, so there should be no issue running multiple queries at once.
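LPS's internals aren't shown here, but the concurrent-versus-sequential batch behavior can be sketched with a thread pool. In this sketch, run_query is a stand-in for handing one query to Log Parser, not LPS's actual code:

```python
from concurrent.futures import ThreadPoolExecutor

def run_query(query):
    """Stand-in for handing one query to Log Parser; a real
    implementation would invoke LogParser.exe or its COM interface."""
    return f"results of: {query}"

def run_batch(queries, sequential=False):
    """Run a batch of queries: concurrently by default (one thread
    per query, like LPS's default), or one at a time when
    'sequential' is set (like 'Process Batch Queries in Sequence')."""
    if sequential:
        return [run_query(q) for q in queries]
    with ThreadPoolExecutor(max_workers=len(queries) or 1) as pool:
        # map preserves the original query order in the results
        return list(pool.map(run_query, queries))
```

Whether concurrency helps depends on the workload: many small queries over different logs parallelize well, while several heavy queries over the same huge log may just contend for disk.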


Now what if we want the queries in a batch to run sequentially rather than concurrently, for performance or other reasons? This functionality is already built into LPS’s options. Just make the change in LPS | Options | Preferences by checking the ‘Process Batch Queries in Sequence’ checkbox. When checked, the first query in the batch is executed, and the next query does not begin until the previous one is complete. This process continues until the last query in the batch has been executed.

Automation

In conjunction with batch jobs, automation allows unattended scheduled automation of batch jobs. For example we can create a scheduled task that will automatically run a chosen batch job which also operates on a separate set of custom folders. This process requires two components, a folder list file (.FLD) and a batch list file (.XML). We create these ahead of time from within LPS. For more details on how to do that, please refer to the manual.

Charts

Many queries that return data to the Result Grid can be charted using the built-in charting feature. The basic requirements for charts are the same as Log Parser 2.2, i.e.

  1. The first column in the grid may be any data type (string, number etc.)
  2. The second column must be some type of number (Integer, Double, Decimal); strings are not allowed

Keep the above requirements in mind when creating your own queries, so that you consciously write the query to include a number in column two. To generate a chart, click the chart button after a query has completed. For requirement #2, even if you forgot to do so, you can drag any numeric column and drop it onto the second column after the fact. This way, if you have multiple numeric columns, you can simply drag the one you’re interested in into the second column and generate different charts from the same data. Again, for more details on the charting feature, please refer to the manual.
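A quick sanity check of your own query output against these two rules can be sketched in Python (the row shapes here are illustrative, not LPS's internal representation):

```python
def can_chart(rows):
    """Check Result Grid rows against the charting requirements:
    column one may be any type, column two must be a number
    (int or float, not a string)."""
    return all(
        len(row) >= 2
        and isinstance(row[1], (int, float))
        and not isinstance(row[1], bool)   # bool is an int subclass in Python
        for row in rows
    )

# A count-per-URI result charts fine; a stringified count does not
assert can_chart([("GET", 1200), ("POST", 340.5)])
assert not can_chart([("GET", "1200")])
```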


Keyboard Shortcuts/Commands

There are multiple keyboard shortcuts built-in to LPS. You can view the list anytime while using LPS by clicking LPS | Help | Keyboard Shortcuts. The currently included shortcuts are as follows:

Shortcut What it does
CTRL+N Start a new query.
CTRL+S Save the active query in the library or query tab, depending on which has focus.
CTRL+Q Open the library window.
CTRL+B Add the selected query (or queries) in the library to the batch.
ALT+B Open the Batch Manager.
CTRL+D Duplicate the current active query to a new tab.
CTRL+ALT+E Open the error log if one exists.
CTRL+E Export the currently selected query results to CSV.
ALT+F Add the selected query in the library to the favorites list.
CTRL+ALT+L Open the raw library in the first available text editor.
CTRL+F5 Reload the library from disk.
F5 Execute the active query.
F2 Edit the name/description of the currently selected query in the library.
F3 Display the list of IIS fields.

Supported Input and Output types

Log Parser 2.2 has the ability to query multiple types of logs. Since LPS is a work in progress, only the most used types are currently available. Additional input and output types will be added when possible in upcoming versions or updates.

Supported Input Types

Full support for W3SVC/IIS, CSV and HTTP Error, and basic support for all built-in Log Parser 2.2 input formats. In addition, LPS includes some custom formats, such as Microsoft Exchange-specific formats, that are not available with the default Log Parser 2.2 install.

Supported Output Types

CSV and TXT are the currently supported output file types.

Log Parser Studio - Quick Start Guide

Want to skip all the details & just run some queries right now? Start here …

The very first thing Log Parser Studio needs to know is where the log files are, and the default location where you would like queries that export their results as CSV files to save them.

1. Setup your default CSV output path:

a. Go to LPS | Options | Preferences | Default Output Path.

b. Browse to and select the folder you would like to use for exported results.

c. Click Apply.

d. Any queries that export CSV files will now be saved in this folder.
NOTE: If you forget to set this path before you start, CSV files will be saved in %AppData%\Microsoft\Log Parser Studio by default, but it is recommended that you change this to another location.

2. Tell LPS where the log files are by opening the Log File Manager. If you try to run a query before completing this step, LPS will prompt you to set the log path. Upon clicking OK on that prompt, you are presented with the Log File Manager. Click Add Folder to add a folder, or Add File to add one or more files. When adding a folder you must still select at least one file so LPS knows which type of log you are working with. When you do, LPS automatically turns the selection into a wildcard (*.xxx), indicating that all matching logs in the folder will be searched.

You can easily tell which folder or files are currently being searched by examining the status bar at the bottom-right of Log Parser Studio. To see the full path, roll your mouse over the status bar.

NOTE: LPS and Log Parser handle multiple types of logs and objects that can be queried. It is important to remember that the type of log you are querying must match the query you are performing. In other words, when running a query that expects IIS logs, only IIS logs should be selected in the File Manager. Failure to do this (it’s easy to forget) will result in errors or unexpected behavior when running the query.

3. Choose a query from the library and run it:

a. Click the Library tab if it isn’t already selected.

b. Choose a query in the list and double-click it. This will open the query in its own tab.

c. Click the Run Single Query button to execute the query.

The query execution begins in the background. Once the query has completed, there are two possible output targets: the result grid in the top half of the query tab, or a CSV file. Some queries return results to the grid, while other, more memory-intensive queries are saved to CSV.

As a general rule, queries that may return very large result sets are best sent to a CSV file for further processing in Excel. Once you have the results, there are many features for working with them. For more details, please refer to the manual.

Have fun with Log Parser Studio, and always remember: there’s a query for that!

Kary Wall
Escalation Engineer
Microsoft Exchange Support


Exchange Client Network Bandwidth Calculator Beta 2

During the beta phase of the Exchange Client Network Bandwidth Calculator (ExNBC), I hope to release an update every 4-6 weeks. These updates will provide new features based on feedback and, where necessary, fix issues in the calculator.

For this Beta 2 release, I have been working with our teams in the Office 365 community to help make the calculator easier to use for customers planning an Exchange Online deployment. The following changes have been made:

  • Corrected Outlook 2003 network latency requirements
  • Provided some Office 365 context help
    • Added Office 365 icon against recommended Office 365 clients
    • If Office 365 is selected on the input page
      • Availability protocol is highlighted if configured incorrectly for Office 365
      • OWA 2007 removed from client list
      • Outlook 2003 removed from client list
      • Non-Outlook Anywhere clients removed from list


Note: Outlook Anywhere in Online mode was not a scenario that was envisaged when the calculator data was created, so although strictly speaking it's a supported configuration, we have no accurate way of predicting network bandwidth for it at the present time. I'll decide whether to address this scenario at a later point; if you're working on a project that would benefit from this, please let me know via the netcalc@microsoft.com address.

Beta 3 is planned for the first week in April. It'll include support for Outlook for Mac 2011.

Please continue to provide your valuable feedback - both positive and negative, to the netcalc@microsoft.com address. We love to read your comments!

Neil Johnson
Senior Consultant, MCS UK

FOPE: New updates for managing policy rules

Forefront Online Protection for Exchange (FOPE) customers, we’re listening to your feedback and queries about FOPE — and we’ve got new documentation updates for you. Please check out the improved Create, Edit, or Delete a Policy Rule in the TechNet Library.

In addition to spam and virus filtering, the FOPE administration center policy rules let you enforce specific company regulations and policies by configuring customizable filtering rules. As a direct response to your feedback about the policy rule information that we’ve published, we updated our guidance to clarify the administrator rights that are necessary in order to make changes to FOPE policy rules. Previously it wasn’t clear that the ability to view and change policies depends upon the access permissions of the logged-in user. You can see the updated information at http://aka.ms/fope/manage-policy.

Share your experience or get more help

If you have your own experiences to contribute, we strongly encourage you to edit or post your own article in the public TechNet wiki. Check out the FOPE FAQ just to get started and look for the Post an article link.

If you need more help, there are other resources too. You can find related information in the links at the end of the policy rule topic that was just updated or, for assisted technical support, you can ask the community in the FOPE forum or contact Microsoft as noted in FOPE Support Information. You can always send us more feedback by using the rating system at the top of every TechNet Library page and Twitter users can follow us @FOPE_UA.

John Andrilla
Forefront Technical Writer

Check out Microsoft Script Explorer for Windows PowerShell (pre-release)

Wanted to write a quick post about a tool that can help you find and catalogue various PowerShell scripts that are scattered on various online communities or – possibly – your internal company network shares.

The tool is called Microsoft Script Explorer for Windows PowerShell and has entered the public Beta 1 stage now.

Just to give you a taste of how it looks…

Screenshot: Microsoft Script Explorer for Windows PowerShell

You can search scripts by category, with keywords or various other options. You can also dive directly into categories, which allows you to see scripts by product.

Have fun with it!

Nino Bilic

Microsoft Exchange on Twitter: A new hash tag for Exchange

If you've been following the #Exchange hash tag on Twitter, you may have noticed the increasing spamminess of this tag - from tweets about stocks and stock exchanges to cultural exchanges and everything in between. We've also heard from many of you who follow us on Twitter and noticed the recent spate of inappropriate or offensive tweets that include this tag.

What's a hash tag?

A hash tag is a keyword or topic marked with the # symbol in a Tweet. It's used to categorize tweets about a topic. Clicking on a hash tag shows tweets that include the keyword. More about hash tags on Twitter.

Starting today, we're moving to a new hash tag - #MSExchange. Although we can't guarantee this tag will be totally spam-free, we're hoping you'll get more targeted tweets when searching for or following it. If you tweet about Microsoft Exchange or related topics, please use #MSExchange to tag your tweet. For example:

VIDEO: The Updated Exchange Deployment Assistant for Exchange 2010 SP2 & Exchange Online Hybrid - http://aka.ms/k5udjq #MSExchange #tools

If you're on Twitter, we welcome you to follow us @MSFTExchange for the latest on Microsoft Exchange, including post updates from EHLO.

Bharat Suneja

Exchange Server Deployment Assistant Update for Exchange 2010 SP2 and Office 365 Hybrid Deployments

We're happy to announce that the Exchange Server Deployment Assistant (ExDeploy) has been enhanced to include support for configuring hybrid deployments using Exchange 2010 SP2 and the Hybrid Configuration Wizard.

The first in several upcoming scenario additions for configuring hybrid deployments with the Hybrid Configuration Wizard, this new scenario is for Exchange 2003 organizations interested in maintaining some users on-premises while others are hosted in the cloud by Microsoft Office 365 for enterprises. Limited, interim hybrid deployment configuration support for Exchange 2007 and 2010 on-premises deployments is also included with this update; complete hybrid deployment checklists for the Exchange 2007 and 2010 on-premises scenarios are in progress and will be released soon. Watch this space for announcements about upcoming Exchange 2007 and 2010 hybrid deployment scenario updates.

The new hybrid information for Exchange 2003 environments is only available in English at this time and requires that you add Exchange 2010 SP2 servers to your current Exchange 2003 organization. If you previously configured a hybrid deployment using the Deployment Assistant and Exchange 2010 SP1 and still need guidance, don’t worry, we haven’t forgotten about you! Previous Deployment Assistant checklists for configuring hybrid deployments with Exchange 2010 SP1 are now located here for your convenience.

Hybrid deployments offer organizations the ability to extend the feature-rich experience and administrative control they have with their existing on-premises Microsoft Exchange organization to the cloud. They provide the seamless look and feel of a single Exchange organization spanning an on-premises organization and an Exchange Online organization. In addition, hybrid deployments can serve as an intermediate step to moving completely to a cloud-based Exchange Online organization. This approach is different from the simple Exchange migration (“cutover migration”) and staged Exchange migration options currently offered by Office 365, outlined here.

About the Exchange Server Deployment Assistant

The Exchange Server Deployment Assistant (ExDeploy) is a web-based tool that helps you upgrade to Exchange 2010 on-premises, configure a hybrid deployment between an on-premises and Exchange Online organization or migrate to Exchange Online.

Screenshot: Exchange Deployment Assistant home page
Figure 1:The Exchange Deployment Assistant generates customized instructions to help you upgrade to Exchange 2010 on-premises or in the cloud

It asks you a small set of simple questions, and then based on your answers, it provides a checklist with instructions to deploy or configure Exchange 2010 that are customized to your environment. These environments include:

  • Stand-alone on-premises Exchange installations and upgrades
  • Hybrid deployment configurations
  • Cloud-only Exchange deployment scenarios

Besides getting the checklist online, you can also print instructions for individual tasks and download a PDF file of your complete configuration checklist.

Your feedback is very important for the continued improvement of this tool. We would love your feedback on this new scenario and any other area of the Deployment Assistant. Feel free to either post comments on this blog post, provide feedback in the Office 365 community Exchange Online migration and hybrid deployment forum, or send an email to edafdbk@microsoft.com via the Feedback link located in the header of every page of the Deployment Assistant.

Exchange Deployment Assistant Team
