Tuesday, May 23, 2017

An error occurred while running detection.

I was applying a cumulative update to my server farm and ran into this error as soon as I started the installation

“An error occurred while running detection.”
Hmm…that seems weird. I tried installing the prerequisite files and still got the same error.
I checked Central Admin to see which version SharePoint was running, since I figured I might be trying to install the wrong version. Under “Check product and patch installation status”, to my horror, I saw this:

I decided to turn to Google for help. After a few searches on “An error occurred during detection”, I happened upon a great blog post that told me my c:\windows\installer folder might be corrupted or missing files. I checked my Central Admin server and sure enough, it was missing most of its .msi and .msp files!
I found a .vbs script on this site: https://kurteichler.wordpress.com/2013/09/11/restore-sharepoint-2010-installer-directory-cache-files/
and used it to restore my missing files.
In order to do so, you must:
  1. Copy the c:\windows\installer folder from another working SharePoint server into a local c:\temp\ folder on your bad server.
  2. Copy the script above into a file. I suggest using a text editor like Notepad++ instead of regular Notepad: plain Notepad can save the copied code with the wrong encoding, so certain characters get mangled and the script fails with “unknown character” errors when you run it as a .vbs. Or you can download it from me: OpUtil.txt. Open it and look it over if you want, but be sure to rename it from .txt to .vbs. Put this script into your c:\temp\ folder as well, for organization.
  3. Open up command prompt on your bad server. Navigate to c:\windows\installer and run the following command:
    • runas /user:DOMAIN\Admin_Account "cscript.exe c:\temp\OpUtil.vbs /RepairCache /srestorelocation=c:\temp\"
  4. Open up a PowerShell window and run Get-SPProduct -local to update the farm’s configuration database.
  5. Run psconfig.exe to update everything and now you should be ready to install your updates.
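Putting steps 3 through 5 together, the whole recovery sequence looks roughly like this. The PSConfig switches shown are the usual build-to-build upgrade arguments for SharePoint 2010, not something specific to this fix; adjust the account, paths, and switches for your own environment.

```shell
REM Step 3: run from c:\windows\installer on the bad server (cmd prompt)
runas /user:DOMAIN\Admin_Account "cscript.exe c:\temp\OpUtil.vbs /RepairCache /srestorelocation=c:\temp\"

REM Step 4: run from the SharePoint 2010 Management Shell
Get-SPProduct -local

REM Step 5: update the farm configuration database
PSConfig.exe -cmd upgrade -inplace b2b -wait
```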

Thursday, March 9, 2017

SharePoint Running Slow? Here’s How We Solved the Mystery!

SharePoint 2010 remains among the most popular document management systems for high-traffic sites and Fortune 500 corporate environments with large volumes of sensitive documents. Sometimes, managing all those documents gets to be too much, even for SharePoint, and document searches can slow to a frustrating crawl. Recently, we solved a fascinating puzzle for one of our corporate clients, who had begun to lose significant productivity due to slow searches. We invite you to match wits with our best engineers and coders to find out how we solved The Case of the Reluctant SharePoint. As a bonus, note that each section is named for a classic film of mystery and suspense. Score one point for each one you’ve seen.

The Usual Suspects

Our client was very happy when we deployed SharePoint 2010 across their corporate network. After a couple of migrations, however, end users started complaining about load times when viewing different pages across the SharePoint portal. We led off our investigation by identifying the following areas of concern as possible culprits:

• Custom .Net Code (webparts, workflows, event handlers, etc.)
• SharePoint Server farm architecture
• Database

Code reviews did not provide any leads. Back to the drawing board.

The 39 Steps (Actually, it only took us 3)

For our next line of inquiry, we examined the server architecture and database to identify bottlenecks that could be causing the reduced performance.

Using PerfMon on the database server (to watch CPU and memory usage), we found that memory consumption was close to 95%. Something was giving SharePoint a bad headache. We went on to investigate potential memory bottlenecks, starting by checking which databases were consuming the buffer memory in SQL Server.
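The original query isn’t shown above, but a standard way to break down buffer pool usage per database is the sys.dm_os_buffer_descriptors DMV; something along these lines (a reconstruction, not necessarily the exact query we ran) will show who is hogging the buffer:

```sql
-- Buffer pool usage per database, in MB (each buffer page is 8 KB)
SELECT DB_NAME(database_id) AS database_name,
       COUNT(*) * 8 / 1024  AS buffer_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY buffer_mb DESC;
```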


Here we found a major clue: the results showed that the Search Application database was consuming most of the buffer memory.

Operation Bottleneck


What process within the Search Application was taking up all the buffer? On analyzing and reviewing the Search Application, we made the following observations:

• The incremental crawl interval (5 minutes) was much shorter than the actual incremental crawl time (>30 minutes), causing crawls to stack up.
• WFE’s were serving User and Crawler HTTP requests.
• Users reported better performance when the incremental crawl was stopped or when the interval between incremental crawls was increased to 3 hours, which obviously gave the buffer time to clear out.

We implemented the following changes in the Search Application architecture and immediately saw a drastic improvement in overall site performance:

1. Remove one of the servers from load balancing
• The load-balanced URL is http://constoso.com/
• The load-balanced servers are WFE1, WFE2 & WFE3

In this example, WFE3 is removed from the load-balanced solution.

2. Keep the “Microsoft SharePoint Foundation Web Application” service running on this server (WFE3) so that it can serve HTTP requests

3. Update the Alternate Access Mappings to ensure that the site can be accessed using the server name or IP address (http://WFE3/)
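The AAM change in step 3 can be made in Central Administration, or sketched in PowerShell like this (the Intranet zone and the URLs here are just examples; substitute whatever fits your farm):

```powershell
# Add http://WFE3 as an internal URL on the Intranet zone of the web application,
# so requests addressed directly to the server name resolve to this web application
New-SPAlternateURL -WebApplication "http://constoso.com/" -Url "http://WFE3" -Zone Intranet -Internal
```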

4. Perform these steps to redirect crawler traffic to a dedicated front-end web server:

At the Windows PowerShell command prompt, run the script in the following example:

$listOfUri = New-Object System.Collections.Generic.List[System.Uri](1)
$zoneUrl = [Microsoft.SharePoint.Administration.SPUrlZone]'Default'
$webAppUrl = "http://constoso.com/"
$webApp = Get-SPWebApplication -Identity $webAppUrl
$webApp.SiteDataServers.Remove($zoneUrl)   # By default this has no items to remove
$URLOfDedicatedMachine = New-Object System.Uri("http://WFE3")
$listOfUri.Add($URLOfDedicatedMachine)
$webApp.SiteDataServers.Add($zoneUrl, $listOfUri)
$webApp.Update()

Verify that the front-end web server is configured for crawling by running the following script at the Windows PowerShell command prompt:

$webApp = Get-SPWebApplication -Identity "http://constoso.com/"
$webApp.SiteDataServers

If this returns any values, the web application uses a dedicated front-end web server.

When a front-end web server is dedicated to search crawls, run the following script to remove the throttling configuration that would otherwise limit the load the server accepts from requests and services.

$svc=[Microsoft.SharePoint.Administration.SPWebServiceInstance]::LocalContent;
$svc.DisableLocalHttpThrottling=$true;
$svc.Update()

The Golden Spider

Our final step was to reduce the incremental crawl interval from 3 hours down to 1 hour, keeping the index fresh while still preventing crawls from stacking. How did you do? Did you figure out the mystery before we did? If you’ve had to deal with similar SharePoint problems, we’d love to hear your story. And if you have a tough mystery for us to crack, our private investigators are standing by.

Search Functionality not working on Specific Domain

Symptoms

Follow these steps if you are running into an issue with SharePoint Search where search queries fail across a one-way trust: a user from the trusted domain issues the query, while the SSA application pool account lives in the trusting domain.

Farm Topology:
•        DomainA and DomainB are in two separate forests with a one way trust relationship from DomainA to DomainB.
•        User (DomainB\User1) has access to content crawled on DomainA.

DomainB\User1 gets zero results when issuing a search query for content on DomainA.

Cause

Security trimming is done in the query processor (QP). In SharePoint 2010, the QP moved from the WFE to the query servers.
Since the WFE only sends the user’s SID to the QP, the AuthZ API fails to authorize across domains.
In SharePoint 2007, security trimming was done on the WFE, where the querying user’s group information was available, so the AuthZ API worked.

Resolution

Run the following Windows PowerShell command:

$searchapp.SetProperty("ForceClaimACLs",1)

Where $searchapp is the Windows PowerShell object for the search service application to be modified. ($searchapp = Get-SPEnterpriseSearchServiceApplication)
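Put together, and assuming a single Search service application in the farm, the whole resolution is only a few lines (the GetProperty call is just a way to confirm the value took effect):

```powershell
# Get the Search service application and force claims-based ACLs
$searchapp = Get-SPEnterpriseSearchServiceApplication
$searchapp.SetProperty("ForceClaimACLs", 1)
$searchapp.GetProperty("ForceClaimACLs")   # should now return 1
```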

You will not see any confirmation; the SetProperty() command sets the value of ForceClaimACLs in the search administration database to 1.

A full crawl is required to enable the new ACL format across the content.
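The full crawl can be started from Central Administration, or with PowerShell along these lines (“Local SharePoint sites” is the default content source name; substitute your own):

```powershell
# Kick off a full crawl of the default content source
$ssa = Get-SPEnterpriseSearchServiceApplication
$cs = Get-SPEnterpriseSearchCrawlContentSource -SearchApplication $ssa -Identity "Local SharePoint sites"
$cs.StartFullCrawl()
```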

NOTE: Search alerts will be broken after enabling this functionality.

Workaround: use a two-way trust instead of a one-way trust.

More Information

Steps to reproduce: 

1) Create a one-way trust configuration where Domain A trusts Domain B (but not vice versa)
2) Install SharePoint 2010 on Domain A and configure the SSA to run with a service account from Domain A
3) Create a web application using Windows Classic or Windows Claims authentication
4) Create some content in SharePoint
5) Give the same rights to the SharePoint content to a user from Domain A and a user from Domain B
6) Perform a full crawl
7) Try a query as a user from Domain A
8) Try a query as a user from Domain B

EXPECTED Behavior

Both users see the same results in the search results page.

CURRENT Behavior

The user from Domain A gets the right content, but the user from Domain B gets only:
a) Content that has been ACLed where the ACL size is greater than 64k (Windows Classic)
b) All the SharePoint content (Windows Claims)