
CAML queries are the most common way to access data inside SharePoint.

The SharePoint object model provides several useful classes that use CAML queries to fetch the data you need (listed below). These classes are fine-tuned for specific scenarios.

  1. SPQuery
  2. ContentIterator
  3. SPSiteDataQuery
  4. PortalSiteMapProvider
  5. CrossListQueryCache and CrossListQueryInfo

Let's explore each of the above and see how they can best be used, depending on your requirements.


SPQuery

SPQuery is probably the most popular class among SharePoint developers. Using it, we can execute a CAML query against an SPList instance to retrieve the items matching the query.

using (SPSite site = new SPSite("http://spsite")) {
 using (SPWeb web = site.OpenWeb()) {
  SPList list = web.Lists["Contacts"];
  SPQuery query = new SPQuery();

  // Define the columns to fetch
  query.ViewFields = "<FieldRef Name=\"Title\" /><FieldRef Name=\"Email\" />";

  // Force the query to fetch only the specified columns
  query.ViewFieldsOnly = true;

  // Filter: e-mail address contains the given text ("example.com" is a placeholder)
  query.Query = "<Where><Contains><FieldRef Name=\"Email\" />" +
   "<Value Type=\"Text\">example.com</Value></Contains></Where>";

  // Define the maximum number of results per page (like a SELECT TOP)
  query.RowLimit = 10;

  // Query for items
  SPListItemCollection items = list.GetItems(query);
  foreach (SPListItem item in items) {
   Console.WriteLine(item["Title"] + " : " + item["Email"]);
  }
 }
}

SPQuery can be used from any kind of application that queries SharePoint data: Windows applications as well as web applications. It is the best way to query a single list when the items returned by the query change frequently, or when you need real-time data in your results.

Below are a few points you should take into consideration to get the best performance out of SPQuery:

  • Always bound the SPQuery with RowLimit. An SPQuery without a RowLimit performs poorly and may fail on large lists. You should specify a RowLimit between 1 and 2000. You can also use paging in SPQuery if you want to retrieve more than 2000 items at a time.
  • Avoid query throttle exceptions. The maximum number of items you should retrieve in a single query or view in order to avoid performance degradation is 5000, the query threshold. If your query returns more items than the configured query threshold, the query is blocked and you will not get results.
  • Use indexed fields in the query where possible. If you query a field that is not indexed and the resulting scan encounters more than 5000 items (the query threshold), your query will not return any results.
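As a hedged sketch of the paging approach mentioned above (the "list" variable and the ordering field are illustrative), SPListItemCollectionPosition carries the position from one page to the next:

```csharp
// Illustrative sketch: page through a large list 2000 rows at a time.
SPQuery query = new SPQuery();
query.RowLimit = 2000; // stay within the recommended 1-2000 range
query.Query = "<OrderBy Override=\"TRUE\"><FieldRef Name=\"ID\" /></OrderBy>";

do
{
    // "list" is an SPList you have already obtained.
    SPListItemCollection items = list.GetItems(query);
    foreach (SPListItem item in items)
    {
        // Process each item in the current page.
    }
    // Carry the position forward; null means there are no more pages.
    query.ListItemCollectionPosition = items.ListItemCollectionPosition;
} while (query.ListItemCollectionPosition != null);
```

Ordering by an indexed field such as ID keeps each page cheap to retrieve, even on very large lists.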

ContentIterator

SharePoint Server 2010 provides a new class named ContentIterator that you can use to query lists without hitting throttle limits, and hence avoid receiving an SPQueryThrottleException. You should consider using ContentIterator if you need to run a query that will return more than 5,000 rows of data.

The ContentIterator object divides the list items into chunks and runs the query against one chunk of list data at a time. Each list item is passed to a callback method until the query is complete.

The following  example demonstrates usage of the ContentIterator class.

    static int noOfErrors = 0;
    static int noOfItemsProcessed = 0;

    protected void RunContentIterator()
    {
        // The Where clause is illustrative; the original query was truncated.
        string camlQuery = "<Where><IsNotNull><FieldRef Name='Title' /></IsNotNull></Where>"
            + ContentIterator.ItemEnumerationOrderByID;

        SPQuery listQuery = new SPQuery();
        listQuery.Query = camlQuery;
        SPList list = SPContext.Current.Web.Lists["Tasks"];

        ContentIterator iterator = new ContentIterator();
        iterator.ProcessListItems(list, listQuery, ProcessItem, ProcessError);
    }

    public bool ProcessError(SPListItem item, Exception e)
    {
        // Process the error; return true to rethrow it.
        noOfErrors++;
        return true;
    }

    public void ProcessItem(SPListItem item)
    {
        // Process the item.
        noOfItemsProcessed++;
    }

ContentIterator iterates through each item in the list and invokes the specified callback method ProcessItem to process the list items. If an error occurs while iterating the list, the ProcessError callback is invoked.

To use ContentIterator efficiently, you should include one of three OrderBy clauses:

  • ContentIterator.ItemEnumerationOrderByID : Gets an OrderBy clause for a query that orders items by ID.
  • ContentIterator.ItemEnumerationOrderByPath : Gets an OrderBy clause that orders query items by URL.
  • ContentIterator.ItemEnumerationOrderByNVPField : Gets an OrderBy clause for a query that orders items by the name/value pair (NVP) index used in the <Where> clause; this actually enables the index to be used.

By default, SharePoint adds an OrderBy clause that orders by content type, which ensures that folders are processed before list items. You should override this behavior with one of the three OrderBy clauses above to take full advantage of indexed fields.

The following code example shows how to use the ContentIterator.ItemEnumerationOrderByNVPField clause. The example assumes that you are querying an indexed field.

SPQuery query = new SPQuery();
query.Query = "<Where><Eq><FieldRef Name=\"IndexedFieldName\"/><Value Type=\"Text\">Sharepoint</Value></Eq></Where>"
    + ContentIterator.ItemEnumerationOrderByNVPField;

ContentIterator contentIterator = new ContentIterator();
contentIterator.ProcessItemsInList(query,
    delegate(SPListItem item)
    {
        // Work on each item.
    },
    delegate(SPListItem item, Exception e)
    {
        // Handle an exception that was thrown while iterating.
        // Return true so that ContentIterator rethrows the exception.
        return true;
    });

SPSiteDataQuery

You can use SPSiteDataQuery when you want to query multiple lists within a site collection simultaneously. Like the SPQuery object, the SPSiteDataQuery object has Query and ViewFields properties. In addition, it also has Lists and Webs properties. The example below uses SPSiteDataQuery to return all events from all calendar lists in the current site collection where the end date is later than today.

SPSiteDataQuery query = new SPSiteDataQuery();
query.Query = "<Where><Gt><FieldRef Name='EndDate'/><Value Type='DateTime'><Today OffsetDays=\"-1\"/></Value></Gt></Where>";
//Sets the list types to search. 106 is the calendar list template.
query.Lists = "<Lists ServerTemplate='106' />";
//Sets the Fields to include in results
query.ViewFields = "<FieldRef Name='Title' /><FieldRef Name='Location' />";
//Sets the scope of the query
query.Webs = @"<Webs Scope='SiteCollection' />";
//Define the maximum number of results for each page (like a SELECT TOP)
query.RowLimit = 10;
//Execute the query
DataTable table = SPContext.Current.Site.RootWeb.GetSiteData(query);

The Lists property specifies which lists within the site collection are included in the query. It can take several forms:

  • Setting the Lists property to <Lists ServerTemplate=[value]/> limits the query to lists of a certain server template. For example, template type 106 is a calendar. By default, this attribute is null, so the query is not limited to lists based on a particular template.
  • Setting the Lists property to <Lists BaseType=[value]/> limits the query to lists of a certain BaseType. By default, the query considers lists of BaseType 0 (generic lists).
  • Setting the Lists property to <Lists Hidden='true'/> includes hidden lists in the query. By default, the query considers all non-hidden lists.
  • Setting the Lists property to <Lists MaxListLimit=[value]/> limits the query to considering no more than the specified number of lists. If the query exceeds the limit, it fails with an SPException. By default, the limit is 1000. When set to 0, there is no limit to the number of lists considered (you should avoid setting the limit to 0).
  • You can also restrict the query to specific lists. For example, to search only two specific lists, use <Lists><List ID="[list1GUID]" /><List ID="[list2GUID]" /></Lists>. The ID attribute identifies each list.
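As a hedged sketch of the last form above (the GUIDs are placeholders for your own list IDs), restricting an SPSiteDataQuery to two specific lists might look like this:

```csharp
// Illustrative sketch: query only two specific lists by ID.
SPSiteDataQuery query = new SPSiteDataQuery();
query.Lists = "<Lists><List ID='7A9FDBE6-0841-430C-8D3A-1A6E9C2D3F10' />" +
              "<List ID='2B1C5D44-91E2-4F5B-A0C7-8E6D4B3A2C11' /></Lists>";
query.Webs = "<Webs Scope='SiteCollection' />";
query.ViewFields = "<FieldRef Name='Title' />";
query.RowLimit = 10;
DataTable results = SPContext.Current.Site.RootWeb.GetSiteData(query);
```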

The Webs property is used to set the scope of the query:

  • Setting the Webs property to <Webs Scope='SiteCollection'/> includes all lists in the site collection.
  • Setting the Webs property to <Webs Scope='Recursive'/> includes only the lists in the current site and the subsites beneath it.

A few important points to note about SPSiteDataQuery:

  • Like SPQuery, SPSiteDataQuery throws an exception when the results exceed the number of items allowed by the MaxItemsPerThrottledOperation or MaxItemsPerThrottledOperationOverride property of SPWebApplication. So you should set the RowLimit property for optimum performance and to avoid throttle exceptions.
  • SPSiteDataQuery does not consider indexed columns, so using indexed columns in the query has no positive effect on performance. This differs from SPQuery, which considers indexed column values and can achieve better performance with them.

PortalSiteMapProvider

PortalSiteMapProvider is the navigation site map provider for SharePoint. The main purpose of the PortalSiteMapProvider class is to help cache content for navigation.

Additionally, it is useful for aggregating data, as it provides cached queries and access to cached object stores. PortalSiteMapProvider also offers efficient management of the caching infrastructure for retrieving list data.

We can use the PortalSiteMapProvider.GetCachedListItemsByQuery method to query a list and also cache the query results. The method first checks the cache to see if the items already exist there. If they do, it returns the cached results. If not, it queries the list, stores the results in the cache, and then returns them.

You can use the example method below in a web part or user control:

protected void TestPortalSiteMapProvider(HtmlTextWriter writer)
{
    SPWeb web = SPContext.Current.Web;
    SPQuery spquery = new SPQuery();
    spquery.Query = "<Where><IsNotNull><FieldRef Name='Title'/></IsNotNull></Where>";
    PortalSiteMapProvider provider = PortalSiteMapProvider.WebSiteMapProvider;
    PortalWebSiteMapNode node = (PortalWebSiteMapNode)provider.FindSiteMapNode(web.ServerRelativeUrl);
    SiteMapNodeCollection nodeCollec = provider.GetCachedListItemsByQuery(node, "Tasks", spquery, web);
    foreach (SiteMapNode smnode in nodeCollec)
    {
        // Render each result; the original loop body was truncated.
        writer.Write(smnode.Title + "<br/>");
    }
}

It should be noted that PortalSiteMapProvider requires an HttpContext (SPContext) to work. So you cannot use it in scenarios where HttpContext is null, for example in console or Windows applications, timer jobs, and so on.

The main advantage of using PortalSiteMapProvider is that it exploits the SharePoint object cache and hence provides efficient data access.

Apart from querying list items, PortalSiteMapProvider can also be used to aggregate information from sites, property bags, and so on. The example below demonstrates how to use PortalSiteMapProvider to retrieve the site property bag values for a specific key across the site collection. Since this information does not reside in a list, neither SPQuery nor SPSiteDataQuery can easily retrieve it.

protected void GetPropertyBagValues()
{
    PortalSiteMapProvider provider = PortalSiteMapProvider.CombinedNavSiteMapProvider;
    NameValueCollection sitenameAndImageUrl = new NameValueCollection();
    if (provider.CurrentNode != null && provider.CurrentNode.HasChildNodes)
    {
        foreach (PortalWebSiteMapNode node in provider.GetChildNodes(provider.CurrentNode))
        {
            // Retrieve a value from the site property bag by key, e.g. SiteImageUrl.
            if (node.GetProperty("SiteImageUrl") != null)
            {
                sitenameAndImageUrl.Add(node.Title, (string)node.GetProperty("SiteImageUrl"));
            }
        }
    }
}

Using PortalSiteMapProvider is one of the best performing data access techniques. However, you should be aware of certain limitations:

  • It cannot be used in Windows or console applications, since HttpContext is null in those applications.
  • It can only be used with publishing sites in SharePoint Server, not with SharePoint Foundation.
  • PortalSiteMapProvider is most useful when the data you are retrieving does not change significantly over time. If you frequently retrieve different list items or data, PortalSiteMapProvider will constantly read from the database, insert the data into the cache, and then return it. In that case you do not benefit from the cache, and the extra work it performs (checking and populating the cache) actually costs you performance.
  • PortalSiteMapProvider uses the site collection object cache to store data. By default, the object cache is limited to 100 megabytes (MB), so the amount of memory PortalSiteMapProvider can use may be limited.

[Note: You can increase the size of the site collection object cache from the object cache settings page in the site collection. However, the memory assigned to the object cache comes out of the shared memory available to the application pool, so you should only increase the limit after ensuring you have that much memory to spare.]

CrossListQueryCache and CrossListQueryInfo

CrossListQueryCache and CrossListQueryInfo provide a very scalable way to run cross-site queries, like SPSiteDataQuery. Unlike SPSiteDataQuery, CrossListQueryCache.GetSiteData() uses the cache and hence performs better, provided you use the correct overload of the method.

The CrossListQueryCache object uses the CrossListQueryInfo object to get cached results or, if no cached results are available, performs a cross-list query against the database and then caches the results for future use. Audience targeting is then applied to the result set, depending on the settings specified in the CrossListQueryInfo object. You can use the CbqQueryCache object to obtain a CrossListQueryInfo object for a specific Content by Query web part.

Overloaded method | Description | Uses cache
GetSiteData(SPSite) | Retrieves the cached data based on the CrossListQueryInfo specification. | Yes
GetSiteData(SPWeb) | Retrieves the data from the SPWeb object. | No
GetSiteData(SPSite, String) | Retrieves the cached data from the SPSite and the specified web URL. | Yes
GetSiteData(SPWeb, SPSiteDataQuery) | Retrieves the data by using the specified SPSiteDataQuery object. | No

If you don't use a CrossListQueryCache.GetSiteData() overload that supports caching, it is better to use SPSiteDataQuery instead. Below is an example that uses a cached version:

protected DataTable TestCrossListQueryCache()
{
    CrossListQueryInfo crossListQueryInfo = new CrossListQueryInfo();
    crossListQueryInfo.Query = "<Where><IsNotNull><FieldRef Name='Title'/></IsNotNull></Where>";
    crossListQueryInfo.ViewFields = "<FieldRef Name=\"Title\" />";
    crossListQueryInfo.Lists = "<Lists BaseType=\"0\" />";
    crossListQueryInfo.Webs = "<Webs Scope=\"SiteCollection\" />";
    crossListQueryInfo.UseCache = true;
    CrossListQueryCache crossListQueryCache = new CrossListQueryCache(crossListQueryInfo);
    DataTable dt = crossListQueryCache.GetSiteData(SPContext.Current.Site);
    return dt;
}

Like PortalSiteMapProvider, CrossListQueryCache and CrossListQueryInfo also need an HttpContext to work, so they cannot be used in Windows or console applications. For their usage, you should consider the same points mentioned above for PortalSiteMapProvider.

KeywordQuery Class

To develop custom search web parts or applications that support search-by-keyword scenarios, the SharePoint query object model exposes the KeywordQuery class.
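As a hedged sketch (the site URL is a placeholder, and this assumes the SharePoint Server 2010 search assemblies), a basic keyword query might look like this:

```csharp
// Illustrative sketch: run a keyword search and load the results into a DataTable.
using (SPSite site = new SPSite("http://spsite"))
{
    KeywordQuery keywordQuery = new KeywordQuery(site);
    keywordQuery.QueryText = "SharePoint";
    keywordQuery.ResultTypes = ResultType.RelevantResults;

    ResultTableCollection results = keywordQuery.Execute();
    ResultTable relevantResults = results[ResultType.RelevantResults];

    DataTable table = new DataTable();
    table.Load(relevantResults, LoadOption.OverwriteChanges);
}
```

Because the search index serves the results, this scales well, but the results are only as fresh as the last crawl.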

PortalSiteMapProvider gives you the power of SPQuery plus a cache for better performance. CrossListQueryCache gives you the power of SPSiteDataQuery plus a cache for better performance. Now, whether to use the cache depends on the kind of data you want to query. If your query returns frequently changing data sets, then caching is actually overhead and can in turn hurt performance.

ContentIterator is good to use only when you want to process more than 5000 items, the default query threshold limit.

SharePoint 2013 introduces new columns, including:

  • GeoLocation
  • Related Items (you will find this column in the Task list)


Integrating location and map features: SharePoint 2013 introduces a new column type, Geolocation, which helps specify a location using Bing Maps. By default this column is not visible in the UI; you can add it through code, as shown in the MSDN documentation. The example below shows how to create the Geolocation column using the client object model.

private static void AddGeolocationField()
{
    // Replace "http://Mysite" with your SharePoint server URL.
    ClientContext context = new ClientContext(@"http://Mysite");

    // Replace "GeoLocationlist" with a valid list title.
    List oList = context.Web.Lists.GetByTitle("GeoLocationlist");

    // DisplayName is shown as the name of the newly added Location field on the list.
    oList.Fields.AddFieldAsXml("<Field Type='Geolocation' DisplayName='Location'/>",
        true, AddFieldOptions.AddToAllContentTypes);
    oList.Update();
    context.ExecuteQuery();
}

Creating a view from that list: SharePoint 2013 adds another new view type called "Map View". After creating the view, the list items are displayed on a Bing map.
Related Items

You will find the Related Items column inside the Task list.

Related Items is a hidden column and is not available among the site columns. To add it to your custom list, you need to change the column's Hidden property; see the blog post "Add the related items to the site column" for details. The "Related Items" column is not available on the new item form.


But when you view an item from the All Items view, you will see the Add Related Items button.


View item form



A modal pop-up appears, letting you choose something related to the site collection (any asset) to link to the item. You can add multiple items to the field, but the Related Items window does not let you select more than one item at a time. I selected the home page as a related item.

AddRelatedItems button

Some of the other new columns:

  • ParentID
  • AppAuthor
  • AppEditor
  • NoCrawl
  • PrincipalCount
  • Checkmark
  • RelatedLinks
  • MUILanguages
  • ContentLanguages
  • UserInfoHidden
  • IsFeatured
  • DisplayTemplateJSTemplateHidden
  • DisplayTemplateJSTargetControlType
  • DisplayTemplateJSIconUrl
  • DisplayTemplateJSTemplateType
  • DisplayTemplateJSTargetScope
  • DisplayTemplateJSTargetListTemplate
  • DisplayTemplateJSTargetContentType
  • DisplayTemplateJSConfigurationUrl
  • DefaultCssFile
  • RelatedItems
  • PreviouslyAssignedTo

SharePoint 2013 Minimal Download Strategy (MDS) is a new feature introduced by Microsoft that improves the end-user experience. I recently checked the master page and found an interesting tag used at the top of most of the placeholders: the SharePoint:AjaxDelta tag.

SharePoint:AjaxDelta wraps all our favorite delegate controls and other elements in the master page. What the AjaxDelta control does is download to the client (browser) only what has changed since the previous download. If nothing has changed, nothing is downloaded, Ajax style.
Following are the AjaxDelta controls available in the master page:

  1. DeltaPlaceHolderAdditionalPageHead
  2. DeltaSPWebPartManager
  3. DeltaSuiteLinks
  4. DeltaSuiteBarRight
  5. DeltaSPNavigation
  6. DeltaWebPartAdderUpdatePanelContainer
  7. DeltaTopNavigation
  8. DeltaSearch
  9. DeltaPlaceHolderPageTitleInTitleArea
  10. DeltaPlaceHolderPageDescription
  11. DeltaPlaceHolderLeftNavBar
  12. DeltaPlaceHolderMain
  13. DeltaFormDigest
  14. DeltaPlaceHolderUtilityContent
When you click on any page, you will see start.aspx followed by # and then your site page URL, for example http://xxx/_layouts/15/start.aspx#/SitePages/Home.aspx. The start.aspx page is responsible for rendering the delta changes.

The Minimal Download Strategy is enabled by default on Team sites and Community sites, but not on publishing sites. MDS is a web-scoped feature.

What is a Delegate Control?

  1. Delegate controls in SharePoint allow branding or substitution of common elements without altering the master page.
  2. The registered control with the lowest sequence number for a given control ID is substituted. (Microsoft's default sequence is 100, so your sequence must be lower than this for your control to be loaded instead.)
  3. Parameters can be passed to the control via the declaration.
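As a hedged sketch of point 2 above (the control ID, sequence number, and user control path are illustrative), a feature elements file that registers a substitute control might look like this:

```xml
<?xml version="1.0" encoding="utf-8"?>
<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
  <!-- Sequence 90 is lower than the default 100, so this control is loaded instead. -->
  <Control Id="SmallSearchInputBox"
           Sequence="90"
           ControlSrc="~/_controltemplates/15/MyProject/MySearchBox.ascx" />
</Elements>
```

Activating the feature swaps the control in; deactivating it restores the default, with no master page edit required.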

Types of Delegate Controls

  1. Multi delegate (the delegate control loads more than one user/server control.)
  2. Single delegate

Multi delegate: if the delegate declaration contains the AllowMultipleControls="true" attribute in the markup, it is a multi delegate control: it loads all the registered user/server controls, ordered by sequence number.

Single delegate: if the delegate declaration lacks the AllowMultipleControls="true" attribute, the delegate is replaced only by the registered control with the lowest sequence number.

HTML example of a delegate control:

<SharePoint:DelegateControl id="ID_SuiteLinksDelegate" ControlId="SuiteLinksDelegate" runat="server" />

Following is the list of delegate controls available in SharePoint 2010; the ones marked (2013) are newly added in SharePoint 2013.

  1. AdditionalPageHead
  2. GlobalSiteLink0
  3. GlobalSiteLink2
  4. GlobalSiteLink3
  5. PublishingConsole
  6. PageHeader
  7. TopNavigationDataSource
  8. TreeViewAndDataSource
  9. PageFooter
  10. QuickLaunchDataSource
  11. SmallSearchInputBox
  12. GlobalNavigation
  13. SuiteBarBrandingDelegate (2013)
  14. SuiteLinksDelegate (2013)
  15. PromotedActions (2013)

I have created the solution that implemented SuiteBarBrandingDelegate, SuiteLinksDelegate and PromotedActions.

SuiteBarBrandingDelegate: changes the top-left bar. In my example, I replace "SharePoint" with "My SharePoint Site".

This is the HTML of the delegate control.


SuiteLinksDelegate: replaces the top-left links bar with custom links, as shown in the images below: "Newsfeed, SkyDrive, Sites" is replaced with "About Us, Contact Us, Feedback" links.

Master page SuiteLinksDelegate HTML markup.

PromotedActions: this is a multi delegate control. I added a "Facebook" link between the "Share" and "Follow" links, as shown below.
This is the HTML of the delegate control.


Feature Element file

Header before

After applying the delegate control

Visual Studio Project

Many developers don't even know the power of delegate controls.

Different master pages are available in SharePoint 2013, so you may not find all of the above delegate controls in every master page. For example, v4.master does not contain the PromotedActions delegate control; it is only available in oslo.master.

In SharePoint 2013, SharePoint:AjaxDelta wraps all our favorite delegate controls and other elements in the master page, so delegate controls now fall under the SharePoint 2013 Minimal Download Strategy (MDS).

The code is attached to this post.

Following are the software boundaries and limits of SharePoint:

Limit | Maximum value | Limit type
Web application | 20 per farm | Supported
Zone | 5 per web application | Boundary
Managed path | 20 per web application | Supported
Solution cache size | 300 MB per web application | Threshold
Application pools | 20 per farm | Supported
Number of content databases | 500 per farm | Supported
Content database size (general usage scenarios) | 200 GB per content database | Supported
Content database size (all usage scenarios) | 4 TB per content database | Supported
Content database size (document archive scenario) | No explicit content database limit | Supported
Content database items | 60 million (includes documents and items) | Supported
Site collections per content database | 10,000 | Supported
Site collections per farm | 750,000 | Supported
Web sites | 250,000 per site collection | Supported
Site collection size | Max size of the content database | Supported
List row size | 8,000 bytes per row | Boundary
File size | 2 GB | Boundary
Documents | 30,000,000 per library | Supported
Major versions | 400,000 | Supported
Minor versions | 511 | Boundary
Items | 30,000,000 per list | Supported
Row size limit | 6 table rows internal to the database used for a list or library item | Supported
Bulk operations | 100 items per bulk operation | Boundary
List view lookup threshold | 8 join operations per query | Threshold
List view threshold | 5,000 | Threshold
List view threshold for auditors and administrators | 20,000 | Threshold
Subsites | 2,000 per site view | Threshold
Coauthoring in Word and PowerPoint for .docx, .pptx and .ppsx files | 10 concurrent editors per document | Threshold
Security scope | 1,000 per list | Threshold

In SharePoint search, what most often annoys people is that the actual content and the search index are not in sync, so the search administrator keeps banging his head against the wall and offering stakeholders the same excuse: "please wait for the next incremental crawl" :). As we know, there are already two content crawling methods: "Full Crawl" and "Incremental Crawl".


The disadvantage of "Full Crawl" and "Incremental Crawl" is that neither can run in parallel; content that changes during a crawl must wait for the next incremental crawl.
So what is new in continuous crawl?
Content sources using continuous crawl can run crawls in parallel. The default wait time is 15 minutes; this can be changed via PowerShell (there is no UI for it). As a result, the content is up to date most of the time. This crawl type works only for SharePoint content sources, so the SharePoint administrator needs to identify content that is updated at regular intervals and needs to be searchable, and put it under the "Continuous Crawl" category. The "Continuous Crawl" is a type of crawl that aims to keep the index as current as possible. The following crawl types are available in the SharePoint 2013 search architecture:
  1. Run by user
    • Full Crawl
    • Incremental Crawl
    • Continuous Crawl
  2. Run by system (automated crawl)
    • Incremental Crawl (clean-up)
  1. Run by user: the content source is created by a user/administrator, and the crawl is triggered or scheduled by the user.
    • Full Crawl:
      • Crawls all items
      • Can be scheduled
      • Can be stopped and paused
      • When required:
        • The content access account has changed
        • New managed properties have been added
        • Content enrichment web service code has been changed/modified
        • A new IFilter has been added
    • Incremental Crawl:
      • Crawls content modified since the last crawl
      • Can be scheduled
      • Can be stopped and paused
      • When required:
        • Content has been modified since the last crawl
    • Continuous Crawl:
      • Keeps the index as current as possible
      • Cannot be scheduled
      • Cannot be stopped or paused (once started, a "Continuous Crawl" can only be disabled)
      • When required:
        • Content changes frequently (multiple instances can run in parallel)
        • Only for SharePoint content sources
        • E-commerce sites in cross-site publishing mode
  2. Run by system: the crawl runs automatically via a timer job.
    • Clean-up continuous crawl (Microsoft definition): a continuous crawl does not process or retry items that return errors more than three times. A "clean-up" incremental crawl automatically runs every four hours for content sources that have continuous crawl enabled, to re-crawl any items that repeatedly return errors. This incremental crawl will try to crawl the item again and then postpone retries if the error persists.


SharePoint 2013: Continuous Crawl and the Difference Between Incremental and Continuous Crawl

With the new version of SharePoint, a new type of crawl appeared in 2013, named "Continuous Crawl". For old-schoolers like me on SharePoint 2010, we had two crawls available and configurable on our Search Service Application:

  • Full: crawls all content.
  • Incremental: as the name says, crawls content that has been modified since the last crawl.

The disadvantage of these crawls is that once one is launched, you cannot launch a second in parallel (on the same content source); content changed in the meantime must wait until the current crawl (and possibly another) is finished to be integrated into the index, and therefore to be found via search. An example:

  • An incremental crawl named ALFA is started and will take 50 minutes.
  • After 10 minutes of crawling, a new document is added, so a second incremental crawl, named BETA, is needed to get the document into the index.
  • This item will have to wait at least 40 minutes to be integrated into the index.


So we can't keep the index up to date with the latest changes, because latency creeps into each crawl cycle. In most cases this may be acceptable for your clients, but for those who want to search their content immediately after its integration into SharePoint, there is now a new solution: "Continuous Crawl".


The Continuous Crawl

To summarize: the "Continuous Crawl" is a type of crawl that aims to keep the index as current as possible.

Its operation is simple: once activated, it launches crawls at regular intervals. The major difference from incremental crawl is that these crawls can run in parallel, without waiting for the previous crawl to complete.

Important Points:

  • "Continuous Crawl" is only available for content sources of type "SharePoint Sites".
  • By default, a new crawl is run every 15 minutes, but the SharePoint administrator can change this interval using the PowerShell cmdlet Set-SPEnterpriseSearchCrawlContentSource.
  • Once started, a "Continuous Crawl" can't be paused or stopped; you can only disable it.
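As a hedged PowerShell sketch (the content source name is a placeholder, and the `ContinuousCrawlInterval` property name is an assumption on my part), enabling continuous crawl and changing the interval might look like this:

```powershell
# Assumes the SharePoint PowerShell snap-in is loaded; names are illustrative.
$ssa = Get-SPEnterpriseSearchServiceApplication
$cs  = Get-SPEnterpriseSearchCrawlContentSource -SearchApplication $ssa -Identity "Local SharePoint sites"

# Turn continuous crawls on for this content source.
Set-SPEnterpriseSearchCrawlContentSource -Identity $cs -SearchApplication $ssa -EnableContinuousCrawls $true

# Change the default 15-minute interval (here: 5 minutes).
$ssa.SetProperty("ContinuousCrawlInterval", 5)
$ssa.Update()
```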

If we take our example above with “Continuous Crawl”:

  • Our ALFA crawl starts and will take at least 50 minutes.
  • After 10 minutes of crawling, an item that was already crawled is modified, requiring a new crawl.
  • Crawl BETA is launched,
  • and starts within (15 - 10) = 5 minutes.
  • Therefore this item only needs to wait about 5 minutes (instead of 50 minutes) to be integrated into the index.



1- How to Enable it?

In Central Administration, click on your Search service application, and then on "Content Sources" in the menu.


Click on "New Content Source" in the menu.


Choose "SharePoint Sites".


Select "Enable Continuous Crawls".



  • The content source has been created, and we can see its status as "Crawling Continuous".


2 – How to disable it?

  • From the content source page, choose the "Enable Incremental Crawls" option. This disables the continuous crawl.
  • Save your changes.


3 – How to see if it works ?

Click on your Search service application, then on "Crawl Log" in the "Diagnostics" section.


Select your content source and click on "View crawl history".

Or via PowerShell, execute the following cmdlets:

$SearchSA = "Search Service"
Get-SPEnterpriseSearchCrawlContentSource -SearchApplication $SearchSA | select *


Impact on our servers

The impact of a "Continuous Crawl" is the same as that of an incremental crawl. Even with parallel execution of crawls, the "Continuous Crawl" stays within the parameters defined in the "Crawler Impact Rule", which controls the maximum number of requests that can be executed against a server (8 by default).

SPQuery List Joins

Posted: January 3, 2014 in SharePoint
SharePoint 2010 step by step: SPQuery list joins using CAML.

I am using the following example lists:
  1. CustomerCity
  2. Customer

List columns:
  • CustomerCity: a single-line-of-text column
  • Customer: a single-line-of-text column, plus a lookup column to CustomerCity

Dummy data was added to both lists.
To use a join on a SharePoint 2010 list with the SPQuery class, we need to set three important properties:
  1. Joins
  2. ProjectedFields
  3. ViewFields

SharePoint 2010 adds join support to CAML through SPQuery.Joins.

Types of joins:
  1. Inner
  2. Left outer

ProjectedFields allows fields from the joined lists to be included for the requested lookup columns.

Joins: each join is represented by a Join element, a child of the Joins element. Only inner and left outer joins are permitted. Moreover, the field in the primary list must be a Lookup type field that looks up to the field in the foreign list. There can be joins to multiple lists, multiple joins to the same list, and chains of joins. If a given list is the foreign list in more than one join, it must have distinct aliases assigned to it by the ListAlias attributes of the Join elements representing those joins.

Note: Multiple Lines of Text and Choice type columns are not supported in ProjectedFields.
private void button1_Click(object sender, EventArgs e)
{
    string siteUrl = "http://home";
    SPWeb _web = new SPSite(siteUrl).OpenWeb();
    var items = _web.Lists["Customer"].GetItems(GetQuery());
    foreach (SPListItem item in items)
    {
        MessageBox.Show(string.Format("{0}----{1}", item["Title"], item["CityTitle"]));
    }
}

private SPQuery GetQuery()
{
    SPQuery _query = new SPQuery();
    _query.Query = "";
    _query.Joins = @"<Join Type='INNER' ListAlias='City'>
                      <!-- List Name: CustomerCity -->
                      <Eq>
                        <FieldRef Name='City' RefType='ID' />
                        <FieldRef List='City' Name='ID' />
                      </Eq>
                    </Join>";
    _query.ProjectedFields = @"<Field Name='CityTitle' Type='Lookup' List='City' ShowField='Title' />
                               <Field Name='CityContentTypeId' Type='Lookup' List='City' ShowField='ContentTypeId' />";
    _query.ViewFields = @"<FieldRef Name='Title' />
                          <FieldRef Name='CityTitle' />";
    return _query;
}

I left the Query property of the SPQuery blank above; you can add a Where condition according to your requirements.