Jason Lee's Blog

Understanding List Query Throttling Limits in SharePoint 2010

By now, most SharePoint developers will have come across the list query throttling settings in SharePoint 2010. Essentially, farm administrators can impose limits on the number of items returned by list queries, in order to protect the performance of the farm as a whole. Limits are applied on a per-Web application basis and are typically managed through the Web application settings in Central Admin.
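
As an aside, the same limits are exposed through the server object model, so you can read them programmatically. Here's a minimal sketch, assuming a console application running on a farm server (the Web application URL is a placeholder):

using System;
using Microsoft.SharePoint.Administration;

class Program
{
    static void Main()
    {
        // Look up the Web application that hosts the site (placeholder URL)
        SPWebApplication webApp =
            SPWebApplication.Lookup(new Uri("http://mywebapp"));

        // The standard list view threshold (5,000 by default)
        Console.WriteLine("List view threshold: {0}",
            webApp.MaxItemsPerThrottledOperation);

        // The higher threshold for auditors and administrators (20,000 by default)
        Console.WriteLine("Auditors and administrators threshold: {0}",
            webApp.MaxItemsPerThrottledOperationOverride);
    }
}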


So far, so good. The concepts of query throttling are well documented, and the rationale will be obvious to anyone who has seen a SharePoint environment grind to a halt under heavy-handed list queries. (For a good explanation of query throttling, together with information on how you can avoid hitting the limits through careful indexing, take a look at Query Throttling and Indexing by the patterns & practices team.) However, it's not always entirely clear how these settings are applied.

First of all, "administrators" is a loose term. Let's clarify who qualifies as "auditors and administrators" for the purposes of these settings. Site collection administrators do not qualify. Farm administrators do not qualify. The only people who qualify are users who have specific permission levels assigned at the Web application level. Specifically, the policy level assigned to the user must include the Site Collection Administrator or the Site Collection Auditor permissions, as shown below.

[Screenshot: permission levels in the Web application user policy]

Now for the bit that took me a little longer to grasp. What does the object model override actually do? Firstly, it doesn't allow you to submit database queries that hit an unlimited number of rows in the database. Secondly, it doesn't change the list view threshold for regular users at all. All the object model override does is allow our auditors and administrators, as defined by the Web application user policy, to submit queries at the higher threshold value. In other words, if you don't use the object model override, auditors and administrators are stuck with the same standard list view threshold as everyone else.

To dig a little deeper into how these thresholds are applied, I provisioned a basic list and used a feature receiver to add 10,000 items. This puts me nicely between the lower threshold and the upper threshold. Next, I created a Web Part that attempts to retrieve all the items from the list. The core code is as follows:

SPWeb web = SPContext.Current.Web;
SPList list = web.Lists["BigList"];
SPQuery query = new SPQuery();
query.QueryThrottleMode = SPQueryThrottleOption.Override;
SPListItemCollection items = list.GetItems(query);
litMessage.Text = String.Format("This list contains {0} items", items.Count);


The important bit is the 4th line down:

query.QueryThrottleMode = SPQueryThrottleOption.Override;

The SPQueryThrottleOption enumeration has three values: Default, Override, and Strict. If you use the default value, the standard list view threshold applies to all users except local server administrators, who are not bound by either threshold. If you set the query throttle mode to Override, users who have the required permissions in the Web application user policy can query at the higher "auditors and administrators" threshold. Local server administrators remain unbound by either threshold. Finally, if you set the query throttle mode to Strict, this closes down the local server administrator loophole and the standard list view threshold applies to all users.

The following table shows which threshold applies to which users for each of the SPQueryThrottleOption values:

Type of user                             | Default   | Override  | Strict
-----------------------------------------|-----------|-----------|----------
Site member                              | Standard  | Standard  | Standard
Site owner                               | Standard  | Standard  | Standard
Site collection admin                    | Standard  | Standard  | Standard
Web app policy: site collection admin    | Standard  | Higher    | Standard
Web app policy: site collection auditor  | Standard  | Higher    | Standard
Farm admin                               | Standard  | Standard  | Standard
Local server admin                       | Unlimited | Unlimited | Standard

Finally, I found an interesting quirk for local server admins. The list view threshold exemptions for local server administrators apply only to users who are explicit members of the Administrators group on the local server. For example, domain admins are implicit members of the local Administrators group by virtue of their membership of the Domain Admins group; even so, the standard list view threshold still applied to my test domain admin account.

I hope this helps to clarify things for anyone else who's confused by list view thresholds.

If you want to know more, Steve Peschka's blog is the best source of information I've seen in this area.


Where Are the SharePoint Client Assemblies?


If you're reading this post, you probably know that SharePoint 2010 provides client-side APIs for Silverlight apps, managed .NET clients, and JavaScript code. However, the assemblies you need in order to start developing SharePoint client apps can be elusive at first. In particular, the Silverlight assemblies aren't where you might expect. All the assemblies and JavaScript libraries that you need for client-side development are deployed to folders beneath the SharePoint root when you install SharePoint 2010. For the record, here's where you can find them.

  • If you’re developing a managed .NET client application—for example, a WPF application—you need to add references to Microsoft.SharePoint.Client.dll and Microsoft.SharePoint.Client.Runtime.dll. You can find them in the 14\ISAPI folder on your SharePoint server.

  • If you’re developing a Silverlight application, you need to add references to Microsoft.SharePoint.Client.Silverlight.dll and Microsoft.SharePoint.Client.Silverlight.Runtime.dll. You can find them in the 14\TEMPLATE\LAYOUTS\ClientBin folder on your SharePoint server.

  • If you’re writing JavaScript code, you can find SP.js and all the other SharePoint JavaScript libraries in the 14\TEMPLATE\LAYOUTS folder.
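
Once the references are in place, a quick smoke test can confirm everything is wired up correctly. Here's a minimal sketch for the managed .NET client case, assuming a console application (the site URL is a placeholder):

using System;
using Microsoft.SharePoint.Client;

class Program
{
    static void Main()
    {
        // Connect to a SharePoint site (placeholder URL)
        using (ClientContext context =
            new ClientContext("http://myserver/sites/mysite"))
        {
            Web web = context.Web;
            context.Load(web, w => w.Title); // request only the title
            context.ExecuteQuery();          // single round trip to the server
            Console.WriteLine(web.Title);
        }
    }
}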

I hope this saves someone the trouble of trawling the SharePoint root looking for assemblies!

Update 10th March 2011: I keep reading that the Silverlight client assemblies are in the 14\ISAPI\ClientBin folder. This folder doesn't exist in any of my installations.

Using the SharePoint 2010 Silverlight Client Object Model to Retrieve Documents


This week I've been working on migrating a Silverlight application to SharePoint 2010. The application in question uses some fairly complex XML files as a data source, and currently relies on a custom Web service to retrieve and update these files. We want to modify the application to retrieve the XML files from a SharePoint 2010 document library. MSDN provides a good article on how to use the managed .NET client object model for SharePoint 2010 to retrieve and update documents in a SharePoint document library. However, this scenario becomes a little more challenging from a Silverlight client, as some of the required classes are unavailable in the Silverlight version of the client object model.

When you work with the managed client object model, the recommended approach for retrieving the contents of a file is to call the synchronous File.OpenBinaryDirect method. This returns a FileInformation instance that exposes the contents of the file as a stream. However, the FileInformation class is not included in the Silverlight client object model. Instead, the Silverlight client object model includes an alternative, asynchronous version of the File.OpenBinaryDirect method. Rather than returning the file information directly, this version exposes the contents of the file as a stream through the event arguments passed to a callback method.

Let's take a look at the code. Suppose we want to retrieve both the metadata for the file and the contents of the file.

ClientContext context = ClientContext.Current;
List targetList =
context.Web.Lists.GetByTitle("My Document Library");
CamlQuery query = new CamlQuery();
query.ViewXml =
   @"<View Scope='RecursiveAll'>
      <Query>
         <Where>
            <Eq>
               <FieldRef Name='FileLeafRef' />
               <Value Type='Text'>input.xml</Value>
            </Eq>
         </Where>
      </Query>
   </View>";

ListItemCollection targetListItems =
   targetList.GetItems(query);
context.Load(targetListItems);
context.ExecuteQuery();

We can now retrieve document metadata from the list item. For example, we could use the following code to establish when the document was created.

if(targetListItems.Count == 1)
{
   ListItem item = targetListItems[0];
   DateTime createdDate =
      Convert.ToDateTime(item["Created_x0020_Date"]);
}

To get the contents of the file, we use the Microsoft.SharePoint.Client.File.OpenBinaryDirect method and specify callback methods:

String serverRelativeUrl =
   @"/sitename/libraryname/foldername/input.xml";
File.OpenBinaryDirect(context, serverRelativeUrl,
   OnOpenSucceeded, OnOpenFailed);

In the callback method, we can read the contents of the file from the stream and do something useful with it.


private void OnOpenSucceeded(object sender, OpenBinarySucceededEventArgs args)
{
   StreamReader strReader = new StreamReader(args.Stream);
   String fileContents = strReader.ReadToEnd();
   strReader.Close();

   //Do something with the file contents
}

In a nutshell, that's how you retrieve SharePoint 2010 documents from a Silverlight client. Note that I used the synchronous ExecuteQuery method, rather than the asynchronous ExecuteQueryAsync method, to send my queries to the server. Silverlight will not allow you to block the UI thread, so if you want to use this approach you need to run your code on a background thread (for example, by using ThreadPool.QueueUserWorkItem to invoke your logic). You might find this approach preferable if you need to send multiple queries to the server—otherwise you can end up with a tangled web of nested callback methods.
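
For illustration, here's a rough sketch of that pattern, assuming a hypothetical statusText TextBlock on the page:

// Queue the CSOM work on a background thread so the synchronous
// ExecuteQuery call doesn't block the UI thread
System.Threading.ThreadPool.QueueUserWorkItem(delegate
{
    ClientContext context = ClientContext.Current;
    List list = context.Web.Lists.GetByTitle("My Document Library");
    context.Load(list, l => l.ItemCount);
    context.ExecuteQuery(); // synchronous, but we're off the UI thread

    // Marshal back to the UI thread before touching any controls
    Deployment.Current.Dispatcher.BeginInvoke(() =>
        statusText.Text = String.Format("{0} items", list.ItemCount));
});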

Next time, I'll take a look at creating, updating, and deleting documents from a Silverlight client.

SharePoint 2010 Query Thresholds Bite You When You Least Expect It

In a previous post, Understanding List Query Throttling Limits in SharePoint 2010, I talked about how SharePoint 2010 applies query throttling to list queries and how you can work around the query thresholds. In this post I just want to show how list query thresholds can cause errors when you least expect them.

Today I was trying to export a SharePoint team site as a WSP. The Save site as template operation kept failing with an unexpected error. I took a look at the event logs, and found the following exception message:

Error exporting the list named "BigList" at the URL: Lists/BigList

"Fine", I thought to myself. I'm not particularly interested in that list, I'll simply delete it. I tried to delete the list and got hit with another runtime error. This time the event logs were more helpful:

Exception type: SPQueryThrottledException
Exception message: The attempted operation is prohibited because it exceeds the list view threshold enforced by the administrator.

Now things were starting to make more sense. The list I was trying to export, and then trying to delete, contains 10,000 items. Even though I'm not explicitly trying to retrieve all 10,000 items, both the export operation and the delete operation will hit all 10,000 rows in the database. The list view threshold is kicking in and blocking the operation.
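
In object model terms, both operations can surface the same exception. Here's a hedged sketch of what catching it might look like in server-side code (the site URL is a placeholder):

using System;
using Microsoft.SharePoint;

class Program
{
    static void Main()
    {
        using (SPSite site = new SPSite("http://mysiteurl")) // placeholder URL
        using (SPWeb web = site.OpenWeb())
        {
            try
            {
                // Deleting the list touches every row, so the threshold applies
                web.Lists["BigList"].Delete();
            }
            catch (SPQueryThrottledException ex)
            {
                // Same message the UI surfaces when the threshold blocks the operation
                Console.WriteLine(ex.Message);
            }
        }
    }
}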

"Fine", I thought to myself again. I'll grant myself full control permissions in the Web application user policy, thereby giving myself the higher auditors and administrators threshold for list operations. Wrong again. As I pointed out in my earlier post, the higher threshold only applies to programmatic queries in which you explicitly invoke the object model override. I was still unable to delete the list through the UI.

So how did I finally get rid of the large and unwanted list? I remembered that local server administrators are exempted from list view thresholds by default. I opened up a PowerShell window, using the local server administrator account, and ran a few cmdlets to delete the list once and for all.

# Run from a PowerShell window as a local server administrator,
# who is exempt from the list view threshold by default
$site = Get-SPSite http://mysiteurl
$web = $site.RootWeb
$list = $web.Lists["BigList"]
$list.Delete()
$web.Dispose()
$site.Dispose()

And the list was gone. If you plan on creating large lists on your SharePoint 2010 sites, it's worth bearing in mind that they're not quite so straightforward to export, move or get rid of once you're finished with them.

Using the SharePoint 2010 Silverlight Client Object Model to Update Documents


Earlier this month, I blogged on how you can use the Silverlight client object model to retrieve files from a SharePoint document library. This time, let's take a look at how you can add or update files in a document library from Silverlight.

Just like the process for retrieving files, the process for adding or updating files differs between managed .NET clients and Silverlight clients. The Silverlight client object model does not support the File.SaveBinaryDirect method, so the recommended approach for managed clients is not available to us. From a Silverlight client, the high-level process is as follows:

  • Convert the contents for your new file to a byte array
  • Create a FileCreationInformation instance to represent the new file
  • Add the file to a folder in a document library

The code should resemble the following:

ClientContext context = ClientContext.Current;
String fileContents = "This is the contents of my file";
// libraryPath, folderName, and filename are assumed to be defined elsewhere
String fileUrl = String.Format(@"{0}/{1}/{2}/{3}",
   new String[]
      {context.Url, libraryPath, folderName, filename});

//Convert the file contents to a byte array
System.Text.UTF8Encoding encoding =
   new System.Text.UTF8Encoding();
Byte[] fileBytes = encoding.GetBytes(fileContents);

//Create an object to represent the file
FileCreationInformation fileCreationInfo =
   new FileCreationInformation();
fileCreationInfo.Url = fileUrl;
fileCreationInfo.Content = fileBytes;
//Overwrite the file if it exists, create if it doesn't
fileCreationInfo.Overwrite = true;

//Add the file to a library
List targetList =
   context.Web.Lists.GetByTitle("My Document Library");
targetList.RootFolder.Files.Add(fileCreationInfo);
targetList.Update();
context.ExecuteQueryAsync(SaveFileSucceeded,
   SaveFileFailed);

And that's how you save a file to a SharePoint document library. You don't need to do anything specific in the callback methods, other than check for errors or report success back to the user. Note that you don't need to add your file to a specific folder in the document library—you can simply add it to the root folder, and SharePoint will use the URL you provided to put it in the right place. Unlike the server-side object model, the Silverlight client object model doesn't expose a collection of files on the Web object.
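
For completeness, here's a minimal sketch of what the callback methods might look like; the UI marshalling and message text are my own additions:

private void SaveFileSucceeded(object sender, ClientRequestSucceededEventArgs args)
{
   // ExecuteQueryAsync invokes callbacks on a background thread,
   // so marshal back to the UI thread before touching controls
   Deployment.Current.Dispatcher.BeginInvoke(() =>
      MessageBox.Show("File saved successfully"));
}

private void SaveFileFailed(object sender, ClientRequestFailedEventArgs args)
{
   Deployment.Current.Dispatcher.BeginInvoke(() =>
      MessageBox.Show("Save failed: " + args.Message));
}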

One limitation of this approach is that it doesn't allow you to specify a content type or provide any metadata for the file. I plan to look a little deeper into this in a later post.

Specifying Content Types from the SharePoint 2010 Silverlight Client Object Model


A few weeks ago, I wrote about how you can use the Silverlight client object model to upload files to a SharePoint document library. One of the limitations of this process is that it doesn't allow you to specify a content type or provide any metadata for the document you're uploading. In this post, I look at how you can programmatically provide this missing information.

As with most client-side operations for SharePoint 2010, the process is a little more complex from a Silverlight client than from a managed .NET client, as many useful methods and properties are unavailable. From a Silverlight client, you need to use the following high-level steps:

  • Upload the file
  • Retrieve the list item corresponding to the file
  • Update the field values of the list item to set the content type and any other required metadata

Let's take a look at how this works in code. Because we're working with a document library, you must upload the file as the first step – SharePoint won't allow you to create a list item first and then upload the document once you've finished providing metadata. I covered uploading a document in a fair amount of detail last time, so let's assume we've done that already. The next step is to retrieve the list item that SharePoint created when we uploaded the document.

Since we need to execute more than one query, it's easier to queue our logic to run on a background thread. This means we can execute queries synchronously rather than creating multiple nested callbacks, which get difficult to untangle after a while.

ClientContext context = ClientContext.Current;
System.Threading.ThreadPool.QueueUserWorkItem(
   new System.Threading.WaitCallback(UpdateMetadata), context);

In the callback method, the first step is to submit a CAML query that retrieves the list item corresponding to our document. Notice that we also load the collection of available content types. You'll see why in a bit.

private void UpdateMetadata(object state)
{
   ClientContext context = (ClientContext)state;
   Web web = context.Web;
   List list =
      context.Web.Lists.GetByTitle("My Document Library");
   CamlQuery query = new CamlQuery();
   query.ViewXml = @"
      <View>
         <Query>
            <Where>
               <Eq>
                  <FieldRef Name='FileLeafRef'/>
                  <Value Type='Text'>Sample.txt</Value>
               </Eq>
            </Where>
         </Query>
         <RowLimit>10</RowLimit>
      </View>";
   ListItemCollection items = list.GetItems(query);
   context.Load(items);
   ContentTypeCollection contentTypes =
      context.Web.AvailableContentTypes;
   context.Load(contentTypes);
   context.ExecuteQuery();

Let's assume we want to assign an arbitrary content type named "Chapter" to our list item. To set the content type of a list item, we need to set the value of the ContentTypeId field. In the Silverlight client object model, the ContentTypeCollection class doesn't allow you to use the name of the content type as an indexer. Instead, we can use a simple LINQ expression to get the ID of our Chapter content type.

   // Take the ID of the single matching content type
   var ctid = (from ct in contentTypes
               where ct.Name == "Chapter"
               select ct.Id).First();

We can now set the content type of our document and provide any required metadata.

   ListItem item = items[0];
   item["ContentTypeId"] = ctid;
   item["PublishingContactName"] = "Jason L";
   item["PublishingContactEmail"] = "jason@example.com";
   item.Update();
   context.ExecuteQuery();
}

In a real-world application, you'd obviously need to check that your query returned one unique list item, build in error handling, and so on. However, hopefully this provides enough information to get you started.

List Relationships and Cascading Dropdowns in SharePoint and InfoPath

Here's the situation. I have two lists on a SharePoint 2010 site – let's call them Product Categories and Products. The Products list includes a lookup column that points to the Product Categories list, so users can associate a category with a product. I need to use these lists to provide choices that users can select from within an InfoPath 2010 form. This is how I want the form to work:
  • The user selects a product category from a dropdown list.
  • The form filters the list of products based on the selected category.
  • The user selects a product from the filtered list of products.
This might sound trivial, but it took me a while to work out the nuances and it doesn't seem to be particularly well documented anywhere, so I figured I'd share it. Essentially, InfoPath 2010 includes a new feature that allows you to specify query fields when you connect to a SharePoint list. This allows you to create cascading dropdowns without resorting to custom code, custom data sources or Web services.

Here's a walkthrough of the process. Remember that Product Categories is our "master" list and Products is our "details" list. I'll assume a rudimentary knowledge of InfoPath in that you're familiar with data connections, binding controls to fields and so on.

First, create a secondary data connection to the Product Categories list. This is straightforward; the list contains only one field. Ensure that you leave the Automatically retrieve data when form is opened option selected.


Next, create a data connection to the Products list. When you select the fields you want to include, ensure you select the Category (lookup) field as well as the Product field.


On the last page of the wizard, ensure you clear the Automatically retrieve data when form is opened option, and then click Finish. We don't want the form to retrieve a list of products until we've specified the category value we want to use as a filter.


Build your form template. I've used dropdown lists to represent the product category and the product. Both controls are bound to simple text fields in the main data source.



In the properties for the Product Category control, configure the dropdown to retrieve choices from the Product Categories data source that you created in step 1.


Ensure that you select the ID column as the Value field. (Lookup columns only store the ID field from the related list, so we'll need to match these ID values to the category lookup in the Products list).

In the properties for the Products control, configure the dropdown to retrieve choices from the Products data source that you created in step 2. (Note that the data source is actually called PLC Products in my screen captures.)



At this point, the controls are set up to:
  • Retrieve choices from our SharePoint lists.
  • Store the user selections in the main data source.
We can now use InfoPath rules to set up the cascade filtering we're looking for. Select the Category control. On the Home tab, on the Add Rule dropdown, click This Field Changes, and then click Set a Field's Value. This launches the Rule Details dialog.

Click the button to the right of the Field text box. In the Select a Field or Group dialog, select the Products data connection, expand queryFields, select the Category field, and then click OK.



By setting the value of this field, we are configuring the Products data connection to only return product records where the product category matches our specified value.

Click the function button to the right of the Value text box, and then click Insert Field or Group. Ensure the Main data connection is selected, select the field that stores the product category value selected by the user, and then click OK.



We have now set the value of our query field to the ID of the category selected by the user. The Rule Details dialog should resemble the following.


Click OK to close the Rule Details dialog. Now that we've set our query field, we can call on the Products data connection to populate the Products dropdown list. In the Rules pane, on the Add dropdown, click Query for data.


Under Data connection, select Products, and then click OK.


Now, when the user selects a category from the Product Category dropdown, the products list is automatically restricted to those products with a matching category value. It's easy once you know how…


If I could emphasise one key point, it's this... ensure you set the value of your query field before you retrieve the data :-)

Conditional Formatting of List Views for SharePoint 2010 – Changing the Font Colour

There are often times when it's useful to draw attention to particular items in a SharePoint list or library. For example, you might want to highlight overdue tasks, colour-code items according to priority, or draw attention to undesirable information. In other words, you want to apply conditional formatting based on field values. Now, as you probably know, SharePoint 2010 uses XSLT-based list views by default. By editing the XSLT for a list view you can apply all manner of rules and conditional formatting. Even better, SharePoint Designer 2010 includes some built-in tools that will figure out the XSLT for you. In the List View Tools tab group, on the Options tab, there's a handy dropdown menu:

[Screenshot: the conditional formatting menu in SharePoint Designer]

In most cases, you're probably going to want to apply conditional formatting by row. First you set your conditions:

[Screenshot: setting the conditions for the formatting rule]

Then you choose the styles you want to apply when the conditions are met.

[Screenshot: choosing the styles to apply when the conditions are met]

When you've set your style, SharePoint Designer modifies the XSLT so that your formatting is rendered as an inline style on the row (<tr>) element when your conditions are met. I've stripped out some of the less relevant attributes to improve readability.


<tr>
  <xsl:attribute name="style">
    <xsl:if test="normalize-space($thisNode/@Status1) != 'Published'
                  and ddwrt:DateTimeTick(ddwrt:GenDisplayName(string($thisNode/@Target_x0020_Pub_x0020_Date1)))
                  &lt;= ddwrt:DateTimeTick(ddwrt:GenDisplayName(string($Today)))">
      background-color: #FFD7D7;
      color: #FF0000 !important;
      font-weight: bold;
    </xsl:if>
  </xsl:attribute>
  ...
So far so good. Everything looks correct in SharePoint Designer. However, if you've tried this, you might have found that when you load the list view in the browser you get mixed results – in particular:
  • Changing background colours and text decorations (such as bolding or italics) works fine.
  • Changing fonts or font colours works in SharePoint Designer but doesn't work in the browser.
As a result, you'll end up with something like this when you view the list through the browser:

[Screenshot: list view in the browser with the background colour applied but the font colour unchanged]

A quick web search revealed that I'm not the first person to encounter this, and I've yet to see a definitive answer, so here goes.

Short version – the problem is down to the way the default styles are structured in corev4.css. Row styles do not cascade down to individual cells (regardless of whether you append an !important flag). If you want to change the background colour, apply conditional formatting at the row level. If you want to change the font colour, and you don't fancy messing around with CSS, apply conditional formatting at the column level.

Long version – read on for a more detailed explanation…

Using IE Developer Tools, you can take a look at how styles are applied to individual HTML elements on the page. The XSLT rule created by SharePoint Designer applies an inline style to a row (tr) element. If we use IE Developer Tools to look at how the row is styled, we can see that everything looks correct – the inline style takes precedence:

[Screenshot: IE Developer Tools showing the inline style applied to the tr element]

However, if we look at how the text within one of the individual cells is styled, you can see that our inline style at the tr level is getting overridden by a more specific style, defined by the ms-vb2 class, at the td level.

[Screenshot: IE Developer Tools showing the ms-vb2 class at the td level overriding the inline tr style]

Unfortunately there's not much we can do about this. If you want to modify any of the styles defined by the ms-vb2 class, such as font, font size and font colour, you need to create a column-level rule rather than a row-level rule. The two types of rules work in exactly the same way in SharePoint Designer—when you create a column-level rule, you can still set conditions based on any field value, not just the column to which you are applying the conditional formatting. If you want to conditionally change the font colour of an entire row, you simply create a column-level rule on every column. This time, SharePoint Designer modifies the XSLT so that your formatting is rendered as an inline style on the column (<td>) element when your conditions are met:

<td>
  <xsl:attribute name="style">
    <xsl:if test="ddwrt:DateTimeTick(ddwrt:GenDisplayName(string($thisNode/@Target_x0020_Pub_x0020_Date1)))
                  &lt;= ddwrt:DateTimeTick(ddwrt:GenDisplayName(string($Today)))">
      color: #FF0000;
    </xsl:if>
  </xsl:attribute>

  ...

This time the browser will render the view as expected. The following image shows the results of a column-level rule on the Target Pub Date column, in addition to the row-level rule described earlier.

[Screenshot: list view with the column-level font colour rule rendered correctly]

If we take a look at the CSS, we can see that our inline column style is overriding the styles provided by ms-vb2.

[Screenshot: IE Developer Tools showing the inline td style overriding ms-vb2]

In summary, there's no real difference between row-level conditional formatting and column-level conditional formatting, other than the scope at which your inline styles are applied. In practice you may often need to use a combination of the two in order to realise a particular style or effect.

Very Slow Upload Speeds to SharePoint Document Libraries?

Using Windows 7 or Windows Server 2008 R2? The problem could be the LAN settings in Internet Explorer.

Here at Content Master we've been trying to get to the bottom of a problem where some users were having trouble uploading files to a SharePoint 2007 deployment. In each case, the users were opening the document library in Windows Explorer and dragging files across (in other words, uploading files using WebDAV over SSL). Reported upload speeds were dropping as low as 1-2kb/second and users were cursing SharePoint left, right and centre.

The front end server and the database server were showing very little load, and the fact that some users seemed unaffected suggested that this was a client-side problem. After much head-scratching, I stumbled across a post from SharePointNation, and a thread on TechNet, that reported how a similar problem was solved by changing the LAN settings in Internet Explorer. I tried it myself, with some degree of scepticism - and it immediately solved the problem for all affected users (IE9 in our case, but I believe the same applies to other versions). Here are the steps:
  1. In Internet Explorer, on the Tools menu, click Internet Options.
  2. On the Connections tab, click LAN settings.
  3. On the Local Area Network (LAN) Settings dialog, under Automatic configuration, clear the Automatically detect settings check box.

[Screenshot: the Local Area Network (LAN) Settings dialog]

I hope this post saves someone from a headache...

Working with the Documents Tab on the SharePoint Ribbon

This week I've been taking a look at using ribbon controls with the SharePoint JavaScript client object model to drive some custom functionality. Ribbon customizations for SharePoint 2010 are fairly well documented. However, when you work with contextual tab groups—and the Documents tab in particular—there are a few nuances and idiosyncrasies that it's worth being aware of up front.

In this case, I want to add a ribbon button that enables the user to perform some additional actions when they select a file in a document library. There are countless scenarios in which you might want to do this – for example, you might add a "Request a copy of this document in large print/audio format/Welsh" control to the ribbon and use the document metadata to prepopulate an InfoPath form. To start with, however, I want to keep it simple:

  • When the user selects a document in a document library, display a button on the ribbon.
  • When the user clicks the button, display some information about the selected document as a client-side notification.

The logical place to put this button is on the Documents tab. This is part of the Library Tools contextual tab group – it's contextual because it's only displayed when the context is relevant, i.e. when the user browses to a document library. The Documents tab is selected automatically when the user selects one or more documents in the library's list view web part:

[Screenshot: the Documents tab in the Library Tools contextual group]

Let's take it one bit at a time for now, and I'll provide a full code listing at the bottom. Firstly, like all declarative ribbon customizations, we start with a CustomAction feature element:

<CustomAction Id="Jason.SP.GSD"
              Location="CommandUI.Ribbon"
              Sequence="11"
              RegistrationType="List"
              RegistrationId="101">


The key point of note here is that if you plan to add controls to a contextual tab, you must use the RegistrationType and RegistrationId attributes to target your ribbon customizations to an appropriate list type. If you're deploying controls to a standard ribbon tab, you can get away with omitting these attributes. It didn't initially occur to me that it should be any different in this case – I'm adding controls to the Documents tab, the Documents tab only shows up when I'm looking at a document library, I shouldn't have to worry about scope, right? But no – if you don't set these attributes, your controls simply won't show up on the tab. In this case, a RegistrationType of "List" and a RegistrationId of "101" scopes our ribbon customization to the document library base type.


Next, we define the controls we want to add to the ribbon. This process is identical regardless of whether you're adding to a contextual tab or a regular tab. In this case, we want to add a new group named "Jason's Actions" to the Documents tab. Within this group, we want to create a single button labelled "Get Selection Details". To accomplish this we need to create two CommandUIDefinition elements – one to define the maximum size of my group element, and one to define the group itself. Creating tabs, groups, and controls has been covered comprehensively elsewhere, so I don't want to go into too much detail – if you're looking for more information in this area, Chris O'Brien's blog post series is an excellent place to start. 

In this case, we'll use the absolute minimum markup required to add a new button in its own group - a Group element to define the group and the controls within it, and a MaxSize element that defines how the group should be rendered on the ribbon. You can specify many more elements if you want – for example you can add a Scale element to specify how your group should render at different sizes, and you can define your own GroupTemplate to specify precisely how controls within your group should be arranged. However, each group must have a matching MaxSize element – otherwise it won't appear on the tab. The easiest approach to creating ribbon controls is to pick out existing controls that resemble what you're looking for and take a look at how they're defined. Let's say we want our group and button to look like the Share & Track group shown here – a large, simple layout with a single control:

[Screenshot: the Share & Track group on the Documents tab]

To replicate this group, the first step is to take a look at the group definition. Ribbon controls are defined in the 14\TEMPLATE\GLOBAL\XML\CMDUI.XML file. To find specific elements in this file, unless you know the ID of the element you're looking for, it's best to start with the top-level elements and narrow down your search – start by finding the right tab group (Id="Ribbon.LibraryContextualGroup"), then locate the correct tab (Id="Ribbon.Document"), then identify the group you're looking for. In this case, the group we want to borrow from has an ID of "Ribbon.Documents.Share":

<Group Id="Ribbon.Documents.Share"
       Sequence="40"
       Command="ShareGroup"
       Description=""
       Title="$Resources:core,cui_GrpShare;"
       Image32by32Popup=".../formatmap32x32.png"
       Image32by32PopupTop="-128"
       Image32by32PopupLeft="-64"
       Template="Ribbon.Templates.Flexible2">

By examining the definition of this group, we can figure out the properties we need:

  • The Share & Track group has a Template attribute of Ribbon.Templates.Flexible2. This identifies the group template that gets applied to the group (also defined in CMDUI.XML if you want to take a closer look). We'll use this value to apply the same layout to our own group.
  • The Share & Track group has a Sequence attribute of 40. We'll use a value of 41 to place our group immediately to the right of the Share & Track group.
Next, we can take a look at how controls are defined within the group. For example, the following markup defines the E-mail a Link button you saw in the previous image:

<Button Id="Ribbon.Documents.Share.EmailItemLink"
        Sequence="10"
        Command="EmailLink"
        Image16by16=".../formatmap16x16.png" 
        Image16by16Top="-16" 
        Image16by16Left="-88"
        Image32by32=".../formatmap32x32.png" 
        Image32by32Top="-128" 
        Image32by32Left="-448"
        LabelText="$Resources:core,cui_ButEmailLink;"
        ToolTipTitle="$Resources:core,cui_ButEmailLink;"
        ToolTipDescription="...,cui_STT_ButEmailLinkDocument;"
        TemplateAlias="o1"
/>

In this case, the TemplateAlias attribute is the value that interests us. Every group template contains one or more placeholders, represented by ControlRef elements, in which you can place your controls. In this case, the E-mail a link button specifies that it should be added to the o1 placeholder in the Flexible2 group template. If we use the same value in our own button, we should get the same result.


Finally, we can also take a look at the matching MaxSize element for the Share & Track group. Remember that these elements are always paired – a Group element always has a corresponding MaxSize element defined within the same tab. Within each MaxSize element, the GroupId attribute identifies the corresponding group:

<MaxSize Id="Ribbon.Documents.Scaling.Share.MaxSize"
         Sequence="40"
         GroupId="Ribbon.Documents.Share"
         Size="LargeLarge" 
/>

In this case, all we're interested in is the Size attribute. A group template can define multiple layouts, and this attribute identifies the specific layout in the Flexible2 template that we want to use – in this case, the LargeLarge layout. 


We can now use all this information we've collected to define our group and button:

<CommandUIExtension>
  <CommandUIDefinitions>
    <CommandUIDefinition Location="Ribbon.Documents.Scaling._children">
      <MaxSize Id="Jason.SP.GSD.JasonsActions.MaxSize"
        Sequence="11"
        GroupId="Jason.SP.GSD.JasonsActions"
        Size="LargeLarge" />
    </CommandUIDefinition>
    <CommandUIDefinition Location="Ribbon.Documents.Groups._children">
      <Group Id="Jason.SP.GSD.JasonsActions"
        Sequence="41"
        Title="Jason's Actions"
        Description="Contains custom document actions"
        Template="Ribbon.Templates.Flexible2">
          <Controls Id="Jason.SP.GSD.JasonsActions.Controls">
            <Button Id="Jason.SP.GSD.JasonsActions.GetButton"
              Sequence="1"
              Image32by32=".../ThumbsUp.PNG"
              LabelText="Get Selection Details"
              Description="Gets the details of the selected document"
              TemplateAlias="o1"
              Command="Jason.SP.GSD.GetCmd" />
          </Controls>
        </Group>
      </CommandUIDefinition>
    </CommandUIDefinitions>


There are a few additional points worth mentioning at this stage:

  • You need a CommandUIDefinition element for each block of XML you want to add to the ribbon.
  • When setting the Location attribute, imagine you're slotting the XML directly into the CMDUI.XML file. Look up the ID of the parent element you want to add to, and append "._children" to get your Location value. For example, we want to add our group to the Groups element with an ID of "Ribbon.Documents.Groups", so our Location attribute is "Ribbon.Documents.Groups._children".
Note that the button has a Command attribute value of "Jason.SP.GSD.GetCmd". This ties the button to a CommandUIHandler element in which we can define the JavaScript that should run when the user clicks the button, as shown below:

    <CommandUIHandlers>
      <CommandUIHandler 
        Command="Jason.SP.GSD.GetCmd"
        EnabledScript="javascript:
          SP.ListOperation.Selection.getSelectedItems().length == 1;"
        CommandAction="javascript: 
          var selectedItems = 
            SP.ListOperation.Selection.getSelectedItems();
          var item = selectedItems[0];
          var itemID = item['id'];
          if (item['fsObjType'] == 0) {
            SP.UI.Notify.addNotification(String.format(
              'Document selected: ID={0}', itemID));
          }
          else {
            SP.UI.Notify.addNotification(String.format(
              'Folder selected: ID={0}', itemID));
          }" 
      /> 
    </CommandUIHandlers>
  </CommandUIExtension>
</CustomAction>


The first point of interest is the EnabledScript attribute. When you add a control to the Documents tab, it is disabled by default – you must use this attribute to specify the conditions under which the control should be enabled. The EnabledScript attribute should specify (or call) a JavaScript function that returns a Boolean value – true to enable the control, false to disable it. In this case, we want the button to be enabled when the user has selected a single document in the document library list view. The JavaScript client-side object model for SharePoint includes a class named SP.ListOperation.Selection for just this kind of eventuality. We can use the getSelectedItems method to return a collection of the items selected in the list view, then check that the length of the collection is equal to 1.


Note: In this example I've added all my JavaScript logic directly to the CommandUIHandler element. As your JavaScript logic grows larger and more complex, a better option would be to deploy a standalone JavaScript file. Yaroslav Pentarskyy describes this approach in this blog post.


Next, the CommandAction attribute specifies the JavaScript function we want to call when our button is clicked. The getSelectedItems method returns a Dictionary of key-value pairs. The value of each dictionary entry is an object with two attributes – id and fsObjType. The id attribute represents the integer ID of the list item, while the fsObjType attribute represents the type of list item object – 0 for a document or a list item, 1 for a folder. While this doesn't give us a great deal of information about the selected item, the integer ID gives us enough information to submit a query for additional document metadata, should we so wish. In this case, as a proof of concept, we simply display a notification containing the document ID when the user clicks our button.
Here's the button in its default disabled state:

[Screenshot: the custom button in its disabled state]

When we select a document, the button is enabled:

[Screenshot: the custom button enabled after selecting a document]

When we click the button, a notification displays the integer ID of the selected document:

[Screenshot: notification showing the ID of the selected document]

And that concludes today's task. Next time I plan to cover how to extend this to do something useful with the selected document. The contents of the feature element are shown below in their entirety for reference (apologies for the tiny font, it helps to keep the line breaks to a minimum).


<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
  <CustomAction Id="Jason.SP.GSD"
                Location="CommandUI.Ribbon"
                Sequence="11"
                RegistrationType="List"
                RegistrationId="101">
    <CommandUIExtension>
      <CommandUIDefinitions>
        <CommandUIDefinition Location="Ribbon.Documents.Scaling._children">
          <MaxSize Id="Jason.SP.GSD.JasonsActions.MaxSize"
                   Sequence="11"
                   GroupId="Jason.SP.GSD.JasonsActions"
                   Size="LargeLarge" />
        </CommandUIDefinition>
        <CommandUIDefinition Location="Ribbon.Documents.Groups._children">
          <Group Id="Jason.SP.GSD.JasonsActions"
                 Sequence="41"
                 Title="Jason's Actions"
                 Description="Contains custom document actions"
                 Template="Ribbon.Templates.Flexible2">
            <Controls Id="Jason.SP.GSD.JasonsActions.Controls">
              <Button Id="Jason.SP.GSD.JasonsActions.GetButton"
                      Sequence="1"
                      Image32by32="/SiteCollectionImages/RibbonIcons/ThumbsUp.PNG"
                      LabelText="Get Selection Details"
                      Description="Gets the details of the selected document"
                      TemplateAlias="o1"
                      Command="Jason.SP.GSD.GetCmd" />
            </Controls>
          </Group>
        </CommandUIDefinition>
      </CommandUIDefinitions>
      <CommandUIHandlers>
        <CommandUIHandler Command="Jason.SP.GSD.GetCmd"
                          EnabledScript="javascript:
                            SP.ListOperation.Selection.getSelectedItems().length == 1;"
                          CommandAction="javascript:
                            var selectedItems =
                              SP.ListOperation.Selection.getSelectedItems();
                            var item = selectedItems[0];
                            var itemID = item['id'];
                            if (item['fsObjType'] == 0) {
                              SP.UI.Notify.addNotification(String.format(
                                'Document selected: ID={0}', itemID));
                            }
                            else {
                              SP.UI.Notify.addNotification(String.format(
                                'Folder selected: ID={0}', itemID));
                            }
        "/>
      </CommandUIHandlers>
    </CommandUIExtension>
  </CustomAction>
</Elements>

Deploying Web Packages as a Non-Administrator User

Regular readers (all six of you ;-)) will have noticed that I haven’t posted about SharePoint for a while. For the last couple of months I’ve been working with the Developer Guidance team at Microsoft to write some MSDN content on enterprise-scale web deployment and application lifecycle management. I’ll let you know when the content is available, and I don’t plan to duplicate it here. What I want to do is just to draw attention to a couple of areas that I found particularly tricky to figure out.

The first area involves the IIS Web Deployment Tool (commonly known as “Web Deploy”), and a gotcha around deploying web packages as a non-administrator user. For brevity I’ll have to assume that you’re broadly familiar with Web Deploy and its main deployment approaches (the remote agent, the temp agent, and the Web Deploy Handler).


One of the big advantages of Web Deploy 2.0, on IIS 7 or later, is that non-administrator users can deploy web packages to specific IIS web sites. This is generally useful in two scenarios:

  • Hosted environments, where tenants need control over specific sites but do not have server-level administrator privileges.
  • Enterprise environments, where members of a development team may need to deploy specific sites but do not typically have server-level administrator privileges.

If you want to enable non-administrator users to deploy web packages, you need to configure the Web Deploy Handler on the target IIS web server. The other deployment approaches (the remote agent and the temp agent) don’t allow users who aren’t server administrators to deploy packages. I’ll assume that you’ve configured the Web Deploy Handler to allow a non-administrator user (FABRIKAM\User) to deploy content to a specific IIS website, as described here.

By default, the Web Deploy Handler exposes an HTTPS endpoint at the following address:

https://[server name]:8172/MSDeploy.axd

For example:

https://TESTWEB1:8172/MSDeploy.axd

However, when a non-administrator user deploys a web package to the Web Deploy Handler, they need to add the IIS website name to the endpoint address as a query string:

https://[server name]:8172/MSDeploy.axd?site=[site name]

For example:

https://TESTWEB1:8172/MSDeploy.axd?site=DemoSite

Why the difference? In a word, authorization. Your non-administrator user doesn’t have server-level access to IIS, they only have access to specific IIS websites. If they attempt to connect to the server-level endpoint, Web Deploy will return an ERROR_USER_UNAUTHORIZED error. The event log on the destination server will show an IISWMSVC_AUTHORIZATION_SERVER_NOT_ALLOWED error like this:

[Screenshot: event log entry showing the IISWMSVC_AUTHORIZATION_SERVER_NOT_ALLOWED error]

So you’ve got to use the site query string. Now for the gotcha.

Due to an open bug in the current version of Web Deploy (2.1), you can’t specify a query string in the endpoint address if you use the .deploy.cmd file generated by Visual Studio to deploy your web package. In other words, this won’t work:

DemoProject.deploy.cmd /Y /M:https://TESTWEB1:8172/MSDeploy.axd?site=DemoSite /U:FABRIKAM\User /P:Pa$$w0rd /A:Basic -allowUntrusted

I’ve seen some fairly bizarre “workarounds” for this—for example, drop the query string and use an administrator account—this works, but it kind of defeats the object when the whole point of the exercise was to use a non-administrator user to deploy the web package. What you need to do is to use Web Deploy (MSDeploy.exe) directly rather than running the .deploy.cmd file.

All the .deploy.cmd file contains is a bunch of parameterized Web Deploy commands. This is put together by the build process to take some of the work out of the deployment. For example, you don’t need to specify the location of the web package, the Web Deploy providers to use for the source and destination, the Web Deploy verb, or the location of the .SetParameters.xml file, because the .deploy.cmd file knows this already. However, there’s nothing to stop you using the raw Web Deploy commands directly. The easiest way to do this is to look at the output when you run the .deploy.cmd file – you’ll see the actual MSDeploy.exe commands written to the console window. You should see something like this (ignore the line breaks):

msdeploy.exe
  -source:package='…\DemoProject.zip'
  -dest:auto,
        computerName='https://TESTWEB1:8172/MSDeploy.axd?site=DemoSite',
        userName='FABRIKAM\User',
        password='Pa$$w0rd',
        authtype='Basic'
  -verb:sync
  -setParamFile:"…\DemoProject.SetParameters.xml"  
  -allowUntrusted

Run this command directly from the command line, using your non-administrator user credentials, and the deployment should succeed.

Packaging and deploying web applications is a fairly broad and complex topic, and I’ve had to gloss over many of the details in this blog post. The content we’re developing for MSDN will cover these kinds of issues in much more detail, and I’ll link to the content as soon as it’s available.

Thanks to Tom Dykstra at Microsoft for helping me troubleshoot the issue and pointing out the bug.

Deploying Databases with Object-Level Permissions

As I mentioned in my last post, we’re currently creating some guidance on deploying enterprise-scale applications. As we go along, I plan to blog about a few of the things that I find particularly tricky to figure out.

This time I want to look at database deployment. When you build a web application project in Visual Studio 2010, the Web Publishing Pipeline features allow you to hook into the IIS Web Deployment Tool (commonly known as “Web Deploy”) to package and optionally deploy your web application. As part of this process you can also deploy local databases to a target server environment. This is all nice and easy to configure through the project property pages in Visual Studio 2010, as shown below.

[Screenshot: database deployment settings in the Visual Studio 2010 project property pages]

I don’t want to describe this process in any detail, you can find that elsewhere on the web (for example here).

Deploying databases in this way has advantages and disadvantages. On the plus side:
  • It’s easy.
  • It’s UI-driven.
  • It figures out most of the settings for you.
On the downside:
  • There’s no support for differential updates. In other words, the destination database is destroyed and recreated every time you deploy, so you’ll lose any data.
  • Some of the default database deployment settings are unsuitable for many real-world scenarios – this is the issue I want to focus on here.
In many cases, you’ll want to avoid the Web Deploy approach altogether and use VSDBCMD.exe to deploy and update your databases, but that’s a conversation for another day. In this post I want to focus on how you can change some of the default behaviours for database deployment using Web Deploy. In particular, Web Deploy omits object-level permissions by default. This causes problems if your database (a) contains stored procedures and (b) grants execute permissions on the stored procedures to database roles.

Suppose you’re using the Visual Studio 2010/Web Deploy approach to deploy an ASP.NET membership database from a local development machine to a destination server environment. (NB you’d typically only deploy a membership database if you’ve modified the schema, otherwise it’s easier just to run ASPNET_REGSQL.exe and create the database from scratch on the destination server). You opt for a full deployment, including schema and data from the source database. On the source database, you can see that various database roles are granted execute permissions on stored procedures:

[Screenshot: execute permissions on stored procedures in the source database]

However, when the database is recreated on the destination database server, these permissions are missing:

[Screenshot: the recreated destination database with the permissions missing]

This can be mystifying at first—essentially, you add users to the built-in database roles such as aspnet_Membership_BasicAccess and aspnet_Membership_FullAccess, but membership of these roles has no effect. They’re basically just empty roles that aren’t mapped to any permissions.

The problem is that stored procedures are “objects” in database terms (don’t ask me, I’m not a DBA), and by default Web Deploy does not include object-level permissions when it scripts the database. To change this behaviour, you need to modify the project file for your web application project.

1. In the Solution Explorer window, right-click your web application project node, and then click Unload Project.

2. Right-click the project node again, and click Edit [project file].

3. Locate the PropertyGroup element that corresponds to your build configuration (for example Release|AnyCPU).

<PropertyGroup
    Condition=" '$(Configuration)|$(Platform)' == 'Release|AnyCPU' ">

4. Within this element, locate the PreSource element and add a Permissions="True" attribute, as shown below. The Permissions attribute indicates that the database script should include all permissions, including object-level permissions, which are defined in the source database.

<PublishDatabaseSettings>
  <Objects>
    <ObjectGroup Name="ApplicationServices-Deployment" Order="1">
      <Destination Path="[Destination Database Connection String]" />
      <Object Type="dbFullSql">
        <PreSource Path="[Source Database Connection String]"
                   ScriptSchema="True"
                   ScriptData="True"
                   Permissions="True"
                   CopyAllFullTextCatalogs="False"
                   DriDefaults="True" />
        <Source Path="[Where to save a copy of the script]"
                Transacted="True" />
      </Object>
    </ObjectGroup>
  </Objects>
</PublishDatabaseSettings>

5. Save and close the project file.

There’s a whole host of settings you can add to the PreSource element to configure how your database is deployed. For example, if you want to deploy a database that already exists on the destination server, you need to add a ScriptDropsFirst="True" attribute to the PreSource element – otherwise Web Deploy will complain that you’re trying to create objects that already exist. The full list of properties that you can set as PreSource attributes can be tricky to track down unless you know how the database deployment process works:
  • Web Deploy uses the dbFullSql provider to deploy databases (the link includes some properties you can use as PreSource attributes).
  • Under the covers, the dbFullSql provider uses SQL Server Management Objects (SMO) to generate database scripts. The ScriptingOptions Properties page describes some SMO properties you can specify as PreSource attributes.

Alternatively, to get a full list of properties, you can run the following Web Deploy command:
msdeploy.exe -verb:sync -source:dbFullSql /?

There’s much more to database deployment than I can cover in a quick blog post, and we’ll cover these kinds of issues in much more detail when we publish to MSDN. For now though I hope this helps to shed some light on the intricacies of database deployment.

Running PowerShell Scripts on Remote Machines from MSBuild

Today's tricky topic is how to get a PowerShell script to execute on a remote machine from a custom MSBuild project file. I won't go into scenarios here, let's get straight to the point. Most of the difficulties encountered in this area revolve around handling parameters, managing paths with spaces, and escaping special characters.

Let's say we have a PowerShell script named LogDeploy.ps1 (it's trivial, but I basically want a test case that needs more than one parameter value). This contains a simple function that writes a single-line entry to a log file:
 
function LogDeployment
{
    param([string]$filepath, [string]$deploydestination)
    $datetime = Get-Date
    $filetext = "Deployed package to " + $deploydestination + " on " + $datetime
    $filetext | Out-File -FilePath $filepath -Append
}
LogDeployment $args[0] $args[1]

The LogDeploy.ps1 script accepts two parameters. The first parameter represents the full path to the log file to which you want to add an entry, and the second parameter represents the deployment destination that you want to record in the log file. When you run the script and provide the required parameter values, it adds a line to the log file in the following format:

Deployed package to TESTWEB1 on 02/11/2012 09:28:18
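
To test the script locally before you worry about remoting, you can invoke it directly from a PowerShell prompt, using the same hypothetical paths as the rest of this post:

& 'C:\Path With Spaces\LogDeploy.ps1' 'C:\Path With Spaces\Log.txt' 'TESTWEB1'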

So how do we run this script on a remote machine? You need to use the Invoke-Command cmdlet. From a PowerShell window, you'd use the following syntax:

Invoke-Command -ComputerName 'REMOTESERVER1'
               -ScriptBlock { &"C:\Path With Spaces\LogDeploy.ps1"
                               'C:\Path With Spaces\Log.txt'
                               'TESTWEB1' }

(There are various other ways of running a script using Invoke-Command, but this is the most painless approach when you need to manage parameters and deal with reserved XML characters, as you'll see shortly.)

If you wanted to run your PowerShell instructions from a regular command prompt, you'd need to invoke the PowerShell executable and provide your PowerShell commands through the -command argument:

powershell.exe -command
  "& {Invoke-Command -ComputerName 'REMOTESERVER1'
                     -ScriptBlock { &'C:\Path With Spaces\LogDeploy.ps1'
                                     'C:\Path With Spaces\Log.txt'
                                     'TESTWEB1' } } "

(Again, there are other ways of invoking the script file, but from extensive trial and error this seems to be the cleanest with regards to paths with spaces, single and double quotes, and so on.)

The key points here are:
  • Wrap your command in double quotes and include an ampersand and braces, i.e. "& {your command}".
  • Use an ampersand followed by a single-quoted path to your ps1 file, i.e. &'your path'.
(I'd tried many, many combinations of double quotes, single quotes, and ampersands before I arrived at this point.)

This brings us closer to the command we need to run from the MSBuild project file. However, there are a few additional considerations when you invoke this command from MSBuild. First, you should include the -NonInteractive flag to ensure the script executes quietly. Next, you should include the -ExecutionPolicy flag with an appropriate argument value. This specifies the execution policy that PowerShell will apply to your script and allows you to override the default execution policy, which may prevent execution of your script. You can choose from the following argument values:
  • A value of Unrestricted will allow PowerShell to execute your script, regardless of whether or not the script is signed.
  • A value of RemoteSigned will allow PowerShell to execute unsigned scripts that were created on the local machine. However, scripts that were created elsewhere must be signed. (In practice, you're very unlikely to have created a PowerShell script locally on a build server).
  • A value of AllSigned will allow PowerShell to execute signed scripts only.
The default execution policy is Restricted, which prevents PowerShell from running any script files.
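
Before relying on the -ExecutionPolicy flag, it can be worth checking which policies are currently in effect on the build server. A quick sanity check from a PowerShell prompt (the output varies by machine):

Get-ExecutionPolicy -List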

Finally, you need to escape any reserved XML characters that occur in your PowerShell command:
  • Replace single quotes with &apos;
  • Replace double quotes with &quot;
  • Replace ampersands with &amp;
When you make these changes, your command will resemble the following:

powershell.exe -NonInteractive -executionpolicy Unrestricted
               -command &quot;&amp; Invoke-Command
                 -ComputerName &apos;REMOTESERVER1&apos;
                 -ScriptBlock { &amp;&apos;C:\Path With Spaces\LogDeploy.ps1&apos;
                                &apos;C:\Path With Spaces\Log.txt&apos;
                                &apos;TESTWEB1&apos; } &quot;

The command is now in a format you can use from MSBuild. Within your custom MSBuild project file, you can create a new target and use the Exec task to run this command:

<Target Name="WriteLogEntry" Condition="'$(WriteLogEntry)'!='false'">
  <PropertyGroup>
    <PowerShellExe Condition="'$(PowerShellExe)'==''">
      %WINDIR%\System32\WindowsPowerShell\v1.0\powershell.exe
    </PowerShellExe>
    <ScriptLocation Condition="'$(ScriptLocation)'==''">
      C:\Path With Spaces\LogDeploy.ps1
    </ScriptLocation>
    <LogFileLocation Condition="'$(LogFileLocation)'==''">
      C:\Path With Spaces\ContactManagerDeployLog.txt
    </LogFileLocation>
  </PropertyGroup>
  <Exec Command="$(PowerShellExe) -NonInteractive -executionpolicy Unrestricted
                 -command &quot;&amp;invoke-command -computername &apos;$(MSDeployComputerName)&apos;
                          -scriptblock {
                          &amp;&apos;$(ScriptLocation)&apos;
                          &apos;$(LogFileLocation)&apos;
                          &apos;$(MSDeployComputerName)&apos;}
                          &quot;" />
</Target>

When you execute this target as part of your build process, PowerShell will run your script on the computer you specified in the -computername argument.
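
To try the target out in isolation, you might invoke it directly with MSBuild, overriding the default property values from the command line (the project file name and server name here are hypothetical):

msbuild.exe Publish.proj /t:WriteLogEntry /p:MSDeployComputerName=TESTWEB1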

One final note - before you can use the Invoke-Command cmdlet to execute PowerShell scripts on a remote computer, you need to configure a WinRM listener to accept remote messages. You can do this by running the command winrm quickconfig on the remote computer. For more information, see Installation and Configuration for Windows Remote Management.
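
If you want to verify that the listener is up before you run the build, the Test-WSMan cmdlet offers a quick check - it returns the listener's identity information if remoting is configured, and an error otherwise (the computer name is hypothetical):

Test-WSMan -ComputerName 'REMOTESERVER1'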

SharePoint: Getting Your End Users On Board

Everyone who implements or manages SharePoint deployments knows that the hardest part is getting your end users to engage with the solution and use it effectively. Consider the following, not uncommon, conversation:

Sales guy: "SharePoint's on the blink again."
Me: "Okay, what's the problem?"
Sales guy: "I can't open a Word doc."
Me: "Send me the link, I'll take a look."

The link comes through, and it looks something like this:

portal.example.org/sales/customers/customer name/division name/program name/quote number/document name.docx

At this point, I'll explain the problem. Basically, the sales team are hitting the file path length limit in Microsoft Office (259 characters, including the path to the temp folder on your computer). I'll remind them that they should be using content types and metadata, rather than multiple nested folders, to organize their tenders and proposals. I'll run through the benefits - it's easier to find what you're looking for, it's like creating your own dynamic folder structure using filters, it's better for search, you don't hit path length limits, and so on.

Me: "You remember the training we did on using content types and metadata?"
Sales guy: "Yeah... but it seemed easier just to create the folders."

In most cases, I end up tweaking a few folder names until the path is short enough for Office to open the file, and the problem goes away. For a few weeks.

This can be frustrating - you've implemented a platform that supports more effective ways of working and collaborating, you've delivered training on how to use it, but users persist in treating it like a basic file share because that's what they feel most comfortable with.

So how do you get around this? Recently I was sent a copy of a DVD, The Psychology of SharePoint Adoption and Engagement (part of the SharePoint Shepherd series), by SharePoint luminary Rob Bogue. In the DVD, Rob examines the user engagement problem by looking at the social psychology behind driving change in the workplace. He draws on a truly eclectic range of academic and experiential thinking - you won't find many SharePoint resources that reference Kurt Lewin, Malcolm Gladwell, John Kotter, and Daniel Pink, amongst others - to examine how to bring users with you when you implement a SharePoint solution.

My advice would be to take a look at the DVD - it's two hours well spent. We all spend a great deal of time and effort developing our technical skills, but the soft skills examined here are just as essential if you hope to roll out a truly successful SharePoint solution.

Enterprise Deployment Tutorial Series Published

Microsoft's web platform includes various tools and technologies which, together, give you a whole lot of control over your web deployment and ALM scenarios. Visual Studio tooling, MSBuild, the Web Publishing Pipeline, the IIS Web Deployment Tool (Web Deploy), VSDBCMD.exe, the Web Farm Framework, and other tools can all work together to give you some really powerful and flexible deployment solutions. The problem has always been that these tools are documented individually, and if you want to figure out how to use them together then you've got your work cut out.

Until now, that is :-)

Over the past six months, I've been working with Tom Dykstra and Sayed Ibrahim Hashimi at Microsoft to create some tutorials on web deployment in the enterprise. I'm pleased to announce that we've now published the series, Deploying Web Applications in Enterprise Scenarios, to the www.asp.net website.

The tutorial series is huge - over 60,000 words - and covers many different aspects of enterprise-scale web deployment. Throughout the authoring process, our aim was to provide holistic, end-to-end guidance on how to meet common deployment requirements, rather than simply focusing in on individual tools. Some of my personal highlights are:


Do take a look, and I hope you find the tutorials valuable.

Problems Viewing Health Reports in SharePoint 2013

I've been faced with an interesting problem over the last few days when working with the SharePoint 2013 RTM build. I'm using SharePoint 2013 RTM and SQL Server 2012 RTM, both on Windows Server 2012. I configure usage and health data collection in Central Admin using default settings. I click View health reports. I specify some criteria under Slowest Pages, and click Go. I then get presented with the following error message:

Sorry, something went wrong
You can only specify the READPAST lock in the READ COMMITTED or REPEATABLE READ isolation levels.

This took quite a bit of troubleshooting. When you click Go on the Health Reports page, SharePoint calls a stored procedure named proc_GetSlowestPages in the WSS_Logging database. After spending some time messing around with SQL Server Profiler, we established beyond doubt that the call to the stored procedure is using the default READ COMMITTED transaction isolation level. The problem lies in a conflict between the proc_GetSlowestPages stored procedure and the database view that it selects data from.

The proc_GetSlowestPages stored procedure looks like this:

SET NOCOUNT ON
SELECT TOP(@MaxRows)
   ServerUrl +
   CASE ISNULL(SiteUrl,'') + ISNULL(WebUrl,'')
      WHEN '/' THEN '' ELSE ISNULL(SiteUrl,'') + ISNULL(WebUrl,'')
   END
   + ISNULL(DocumentPath,'')
   + ISNULL(QueryString,'') AS Url,
   CONVERT(float, AVG(Duration))/1000 AS AverageDuration,
   CONVERT(float, MAX(Duration))/1000 AS MaximumDuration,
   CONVERT(float, MIN(Duration))/1000 AS MinimumDuration,
   AVG(QueryCount) AS AverageQueryCount,
   MAX(QueryCount) AS MaximumQueryCount,
   MIN(QueryCount) AS MinimumQueryCount,
   COUNT(*) AS TotalPageHits
FROM dbo.RequestUsage
WITH (READPAST)
WHERE PartitionId IN (SELECT PartitionId FROM dbo.fn_PartitionIdRangeMonthly(@StartTime, @EndTime))
AND LogTime BETWEEN @StartTime AND @EndTime
AND (@WebApplicationId IS NULL OR WebApplicationId = @WebApplicationId)
AND (@MachineName IS NULL OR MachineName = @MachineName)
GROUP BY ServerUrl, SiteUrl, WebUrl, DocumentPath, QueryString
ORDER BY AVG(Duration) DESC

Notice that the SELECT statement queries dbo.RequestUsage, which is a database view. It uses the READPAST hint, which essentially tells the query engine to skip any locked rows.

The RequestUsage view looks like this:

SELECT * FROM [dbo].[RequestUsage_Partition0] WITH (NOLOCK) UNION ALL
SELECT * FROM [dbo].[RequestUsage_Partition1] WITH (NOLOCK) UNION ALL
SELECT * FROM [dbo].[RequestUsage_Partition2] WITH (NOLOCK) UNION ALL
SELECT * FROM [dbo].[RequestUsage_Partition3] WITH (NOLOCK) UNION ALL
SELECT * FROM [dbo].[RequestUsage_Partition4] WITH (NOLOCK) UNION ALL
SELECT * FROM [dbo].[RequestUsage_Partition5] WITH (NOLOCK) UNION ALL
SELECT * FROM [dbo].[RequestUsage_Partition6] WITH (NOLOCK) UNION ALL
SELECT * FROM [dbo].[RequestUsage_Partition7] WITH (NOLOCK) UNION ALL
SELECT * FROM [dbo].[RequestUsage_Partition8] WITH (NOLOCK) UNION ALL
SELECT * FROM [dbo].[RequestUsage_Partition9] WITH (NOLOCK) UNION ALL
SELECT * FROM [dbo].[RequestUsage_Partition10] WITH (NOLOCK) UNION ALL
SELECT * FROM [dbo].[RequestUsage_Partition11] WITH (NOLOCK) UNION ALL
SELECT * FROM [dbo].[RequestUsage_Partition12] WITH (NOLOCK) UNION ALL
SELECT * FROM [dbo].[RequestUsage_Partition13] WITH (NOLOCK) UNION ALL
SELECT * FROM [dbo].[RequestUsage_Partition14] WITH (NOLOCK) UNION ALL
SELECT * FROM [dbo].[RequestUsage_Partition15] WITH (NOLOCK) UNION ALL
SELECT * FROM [dbo].[RequestUsage_Partition16] WITH (NOLOCK) UNION ALL
SELECT * FROM [dbo].[RequestUsage_Partition17] WITH (NOLOCK) UNION ALL
SELECT * FROM [dbo].[RequestUsage_Partition18] WITH (NOLOCK) UNION ALL
SELECT * FROM [dbo].[RequestUsage_Partition19] WITH (NOLOCK) UNION ALL
SELECT * FROM [dbo].[RequestUsage_Partition20] WITH (NOLOCK) UNION ALL
SELECT * FROM [dbo].[RequestUsage_Partition21] WITH (NOLOCK) UNION ALL
SELECT * FROM [dbo].[RequestUsage_Partition22] WITH (NOLOCK) UNION ALL
SELECT * FROM [dbo].[RequestUsage_Partition23] WITH (NOLOCK) UNION ALL
SELECT * FROM [dbo].[RequestUsage_Partition24] WITH (NOLOCK) UNION ALL
SELECT * FROM [dbo].[RequestUsage_Partition25] WITH (NOLOCK) UNION ALL
SELECT * FROM [dbo].[RequestUsage_Partition26] WITH (NOLOCK) UNION ALL
SELECT * FROM [dbo].[RequestUsage_Partition27] WITH (NOLOCK) UNION ALL
SELECT * FROM [dbo].[RequestUsage_Partition28] WITH (NOLOCK) UNION ALL
SELECT * FROM [dbo].[RequestUsage_Partition29] WITH (NOLOCK) UNION ALL
SELECT * FROM [dbo].[RequestUsage_Partition30] WITH (NOLOCK) UNION ALL
SELECT * FROM [dbo].[RequestUsage_Partition31]

Notice that the view uses a whole bunch of NOLOCK hints. These essentially tell the query engine to ignore any locks. This is the source of the problem: you cannot use NOLOCK and READPAST in the same query as they basically contradict each other. Although the transaction isolation level is READ COMMITTED, the use of the NOLOCK hints means it behaves like a READ UNCOMMITTED isolation level.
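
If you want to see the conflict in isolation, you should be able to reproduce it with a trivial table and view - a sketch with made-up object names, best run in a scratch database rather than WSS_Logging:

CREATE TABLE dbo.DemoTable (Id int);
GO
CREATE VIEW dbo.DemoView AS SELECT Id FROM dbo.DemoTable WITH (NOLOCK);
GO
-- This SELECT should fail with the same error, because the READPAST hint
-- on the view conflicts with the NOLOCK hint applied inside it.
SELECT * FROM dbo.DemoView WITH (READPAST);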

As far as I can see, this is a bug in SharePoint 2013 RTM, which creates the usage database (named WSS_Logging by default) when you first configure usage and health data collection. I'd guess Microsoft will address it with a patch in the near future. I managed to work around it by altering the proc_GetSlowestPages stored procedure and commenting out the WITH (READPAST) line.

Thanks to Geoff Allix and Graeme Malcolm for the SQL Server tips, it's been an education :-)

Enforcing Site Policy Selection in SharePoint 2013

One of the new features in SharePoint 2013 is the ability to create and publish site policies. Essentially, a site policy defines when a site should be closed and when it should be deleted, together with any reminders, workflows, and so on that you want to associate with the process. You create site policies at the site collection level, and you can then publish them through a Managed Metadata Service application to make them available on other site collections.

One of the big selling points of site policies is that you can use them in conjunction with self-service site creation. The basic idea is that you allow people to create their own sites, but mitigate the associated risk of site proliferation by forcing them to select an appropriate site closure and deletion policy as part of the site creation process. This is all well documented elsewhere, so I don't plan to go into it here. Instead, I want to focus on a couple of specific issues that stumped us for a few hours.

Issue 1: The link on the Self-Service Site Creation Management dialog is a red herring

When you configure self-service site creation, you'll see a dialog like this:

[Screenshot: the Self-Service Site Creation Management dialog]

Notice the link at the top of the page:
Users can create their own Site Collections from: http://.../_layouts/15/scsignup.aspx

This is a red herring. If you use this link, you'll get a site collection creation dialog of sorts, but there won't be any option to select a site policy - even though you've set Site Classification Settings to A required choice. Instead, you need to use the following site-relative URL:
/_layouts/15/selfservicecreate.aspx

If you use this URL, you'll be presented with a page that forces you to select from a list of your published site policies:

[Screenshot: the selfservicecreate.aspx page with a mandatory site policy selection]

Issue 2: You must explicitly specify the site creation link
 
If you've read up on configuring self-service site creation, you'll know that the general idea is that users should create site collections from the Sites page on their My Site. If you've enabled self-service site creation on the web application that hosts your My Sites, you'll see a new site link at the top of the Sites page:

[Screenshot: the new site link at the top of the Sites page]

When you click this link, SharePoint launches the selfservicecreate.aspx page as a dialog. However, unless you have explicitly specified the link to the page in the self-service site creation settings, SharePoint will display a default version of the page - in other words, it will ignore the site classification settings you've configured. When you configure self-service site creation for the My Sites web application, under Start a Site, you must select Display the custom form at and then specify the link to the selfservicecreate.aspx page:

[Screenshot: the Start a Site setting with "Display the custom form at" selected]

Now, when you click the new site link, you should see the correct version of the dialog that forces you to select an appropriate policy template for the new site collection:

[Screenshot: the new site dialog requiring selection of a site policy]

Hope that saves a headache or two.

We don't know what happened, but something went wrong. Could you please try that again?

I recently ran into an issue I hadn't seen before when configuring Excel Services on SharePoint 2013 RTM. I'd performed all the configuration steps described on TechNet. However, when I tried to browse to a workbook, I got the following error:

[Screenshot: error page - "We don't know what happened, but something went wrong. Could you please try that again?"]

I checked the event logs and the trace logs, and the root of the problem was an Excel Services Application error with event ID 5226: Unable to create or access workbook cache.


This might seem like a straightforward permission issue. However, what also tends to happen in this situation is that the IIS application pool will stop, and this error gets buried under many more generic errors. (I've seen event IDs 5231, 5239 and 5240, for which the official advice is to restart the server. Obviously in this case that isn't much help.)

The fix is straightforward - change the permissions on the %WINDIR%\Temp folder. The application pool account for Excel Services, as a managed account, is a member of the local WSS_WPG security group, which has read and execute permissions on the Temp folder. Add the modify permission, recycle the application pool, and everything should work properly.
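
If you'd rather script the change than click through the Security tab, an icacls command along these lines should do the job (run it from an elevated command prompt; the (OI)(CI) flags propagate the grant to subfolders and files, and M is the modify right):

icacls %WINDIR%\Temp /grant "WSS_WPG:(OI)(CI)M"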

You could of course grant permissions on the Temp folder to the individual application pool account - in this case, I opted to grant permissions to the WSS_WPG group in case other managed accounts need to create temporary cache files.

The target principal name is incorrect. Cannot generate SSPI context.

Today's problem occurred after I restarted a Hyper-V based SharePoint 2013 farm (Windows Server 2012, one SharePoint 2013 machine, one SQL Server 2012 machine, one DC). I fired up Central Administration and was hit with the following error:

Unknown SQL Exception 0 occurred. Additional error information from SQL Server is included below.

The target principal name is incorrect. Cannot generate SSPI context.

After checking the obvious things - testing connectivity to the DB server, checking the SQL service was running, verifying permissions, etc. - I initially figured this was an issue with my Hyper-V snapshots being out of sync, so I ran the SharePoint Products Configuration Wizard. This hit me with the following error:

Failed to detect if this server is joined to a server farm. Possible reasons for this failure could be that you no longer have appropriate permissions to the server farm, the database server hosting the server farm is unresponsive, the configuration database is inaccessible or this server has been removed from the server farm.

I attempted to rejoin the server farm to no avail, then I realised I was barking up the wrong tree. The initial error message suggests a Kerberos issue, while my farm is set up to use NTLM. After a lot of searching, this ancient forum thread pointed me in the right direction. In Active Directory, I opened the computer record for the DB server. In the attribute list, the servicePrincipalName attribute showed the following entries:

[Screenshot: the servicePrincipalName attribute values for the database server's computer account, including MSSQLSvc entries]

Initially I tried deleting just the MSSQLSvc entries, as suggested by the forum thread, but to no avail. So I deleted the whole lot. With no SPNs, authentication falls back to NTLM as it should and the farm comes back to life.
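
If you prefer the command line to the Active Directory attribute editor, the setspn utility can list and delete the registrations. A sketch - the machine and domain names are hypothetical, with DBSERVER being the database server's computer account:

setspn -L DBSERVER
setspn -D MSSQLSvc/dbserver.contoso.com:1433 DBSERVER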

Update: I'm fairly certain that this issue arose when I added Analysis Services to the SQL Server instance on the database server.

The Search Dictionaries term store group does not exist

Today's SharePoint 2013 configuration quagmire is this: you provision the Search service application, you click on Search Dictionaries - for example, to manage company name extraction or query spelling correction - and you see a largely empty term store.

In other words, you click this:

[Screenshot: the Search Dictionaries link on the Search service administration page]

You expect to see this:

[Screenshot: the term store with Search Dictionaries, People, and System groups]

But instead you see this:

[Screenshot: a largely empty term store with no Search Dictionaries group]

As is often the case with these issues, you'll probably find that everything works fine if you use the Farm Configuration Wizard to provision your services. However, if you do things properly and create your service applications by hand, you come across this kind of issue.

The Solution


The Search Dictionaries term set group is not created when you provision the Search service application; it's created when you provision the Managed Metadata service application. When you create a Managed Metadata service application, it should create the Search Dictionaries and the People term set groups in addition to the System group. However, this will only happen if you have provisioned the State service application before you provision the Managed Metadata service application.

You can find details on how to configure the State service here (SharePoint 2010 article, but the steps are the same). For convenience, in a basic deployment you can provision a State service application by running the following PowerShell cmdlets:

$state = New-SPStateServiceApplication -Name "Contoso State Service"
New-SPStateServiceDatabase -Name "ContosoStateDB" -ServiceApplication $state
New-SPStateServiceApplicationProxy -Name "Contoso State Service" -ServiceApplication $state -DefaultProxyGroup
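
Provisioning the Managed Metadata service application afterwards looks similar. A sketch for a basic deployment - the names are hypothetical, and the command assumes the service application pool already exists:

$mms = New-SPMetadataServiceApplication -Name "Contoso Managed Metadata Service" -DatabaseName "ContosoMetadataDB" -ApplicationPool "SharePoint Service Applications"
New-SPMetadataServiceApplicationProxy -Name "Contoso Managed Metadata Service" -ServiceApplication $mms -DefaultProxyGroup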

So, in summary, configure your service applications in the following order:
  1. Provision a State service application.
  2. Provision a Managed Metadata service application.
  3. Configure usage and health data collection.
  4. Ensure the usage and health data collection proxy is started.
  5. Provision the Search service.
Everything should then work fine.

Note: steps 3 and 4 aren't necessary to create the Search Dictionaries group, but the Search service will report errors if the usage and health data collection service app isn't set up. When you configure usage and health data collection manually, rather than using the Farm Configuration Wizard, you'll typically find that the service application proxy is in the Stopped state. For a good explanation of how to start it, see http://tristanwatkins.com/fixing-the-usage-and-health-data-collection-sa/.
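
One common way to start the proxy is to provision it from PowerShell - a sketch, run from the SharePoint Management Shell, which matches on the proxy's type name and assumes a default installation:

$proxy = Get-SPServiceApplicationProxy | Where-Object {$_.TypeName -like "Usage*"}
$proxy.Provision()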