Friday, October 22, 2010

Productivity Power Tools for Visual Studio 2010

Productivity Power Tools is a decent extension for Visual Studio 2010; its Solution Navigator combines Solution Explorer and Class View into a single view.

Thursday, October 14, 2010

Resharper Aids With Localizing Code

Resharper 5 (Jetbrains.com) recognizes embedded strings and moves them to resources for you. It does a great job of telling the difference between user-visible strings, which you probably want to localize, and diagnostics and logs, which you probably do not.

Monday, October 4, 2010

Looking At Thunderbird and Outlook as Gmail Clients

We compared Outlook 2010 + Google Apps Sync to Thunderbird 3.1.4 + Gmail Conversation View 1.2.4 + Google Contacts 0.6.33 + Lightning 1.0b2 + Provider for Google Calendar 0.7.1.

Both give you synchronized Gmail, Contacts and Calendar in Outlook and Thunderbird respectively. The Thunderbird combo is better for a bunch of reasons, including:

  • Syncing is almost instantaneous over broadband. Google Apps Sync for Outlook is painfully slow. If you get a lot of email you can easily spend a couple of minutes waiting for the Google service to let you read new mail - unless you leave Outlook running 24/7. That's a Google problem.
  • MUCH better threaded view in the "reading pane". That's an Outlook problem.
  • Thunderbird gives you more control over IMAP folder subscriptions if you care.

Of course none of this applies if you don't mind using Gmail's web interface. I find a desktop client like Thunderbird gives me a nice side-by-side treatment of the inbox contents and the conversation thread in a reading pane. The web UI makes you scroll up and down.

Google combines mail, calendar and contacts sync into one plug-in, Google Apps Sync for Outlook. With Thunderbird you have to install separate add-ons for these functions.

Wednesday, September 22, 2010

Subversion hosting at Assembla

Assembla is a nice hosting service. They offer free source-control-only accounts, with paid upgrades that add various development tools. Creating an account and an SVN repo is fast and easy.

Thursday, August 26, 2010

Using SVN and Go With Visual Studio For Continuous Integration

Combining Subversion and Go provides a very lightweight and low-cost alternative to Team Foundation Server for continuous integration with Microsoft Visual Studio. Setting this up is easy and takes only a few minutes.

Requirements:

  • TortoiseSVN, which integrates elegantly with Windows Explorer. It's free. We combine it with VisualSVN, a very nice extension for Visual Studio, but that is not required.
  • Go from ThoughtWorks Studios. The Community Edition is free.

Setting all of this up is straightforward. Just follow the documentation for each. You're safe with default options across the board.

You have various options for building a Visual Studio solution from Go. The easiest approach is to install Visual Studio on the build machine and configure the Go Pipeline to call devenv.exe using the “Exec” build option in Go. Wrapping the command line execution of devenv in a bat file is advised. For example:

Let’s say you have a solution called MySolution.sln. Create a bat file and call it MyBuild.bat. Put this in the bat file:

"C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\devenv.exe" -clean
"C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\devenv.exe" C:\Users\me\src\project\MySolution.sln -Build "Release|Any CPU"


A basic Go Pipeline to build this looks like this:

<pipeline name="BuildBlueDiamond">
  <materials>
    <svn url="http://frodo:81/svn/project" username="me" password="me" />
  </materials>
  <stage name="defaultStage">
    <jobs>
      <job name="defaultJob">
        <tasks>
          <exec command="C:\Users\me\src\project\MyBuild.bat" />
        </tasks>
      </job>
    </jobs>
  </stage>
</pipeline>
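
For reference, pipeline definitions like this live in Go's cruise-config.xml on the server. You can also build the same pipeline through Go's web-based admin UI rather than editing the XML by hand.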

Thursday, July 1, 2010

Tuning The Performance Of SharePoint 2010

This article piqued our interest.

Highlights:

1. SharePoint doesn’t like big iron.

After looking at our test results as well as collecting their own data, Microsoft SharePoint® Support indicated that SharePoint® was apparently unable to make use of such large hardware (8 processors with 16G of RAM). In an effort to validate that the problem was indeed caused by the large hardware, they recommended that we reduce the number of processors to 4, and then later suggested reducing it to 2. In each case, this resulted in a surprising performance improvement but the stalling behavior remained.

2. Tempdb database contention in SQL Server.

After additional testing and data gathering, Microsoft Support engineers found that contention on tempdb allocations within SQL Server was causing delays processing queries from SharePoint®. This problem is described in the Microsoft Knowledge Base (#328551).

The fix required creating additional tempdb data files within SQL Server (one for each processor) and enabling a startup parameter (-T1118) that instructed SQL Server to use a round-robin tempdb allocation strategy. This change reduced resource allocation contention in the tempdb database, improving performance on complex queries.
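
As an illustration of that remedy, here's a minimal sketch; the file name, path and size below are assumptions, and you'd repeat the ALTER once per processor:

REM Add one extra tempdb data file (example name and path)
sqlcmd -S . -Q "ALTER DATABASE tempdb ADD FILE (NAME = tempdev2, FILENAME = 'C:\SQLData\tempdb2.ndf', SIZE = 512MB)"

The -T1118 flag itself is added to the SQL Server service startup parameters (via SQL Server Configuration Manager), followed by a service restart.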

3. Single-threaded cache access in a multi-processor system.

…problems with the size of the TokenAndPermUserStore cache in SQL Server. When the server has a large amount of physical memory (in this case 32G) and the rate of random dynamic queries is high, the number of entries in this cache grows rapidly. As the cache grows, the time required to traverse and cleanup the cache can be substantial. Because access to this cache is single-threaded, queries can pile up behind each other waiting for the cleanup to complete. This queuing slows performance and prevents a multi-processor system from scaling as expected. The remedy was to start SQL Server with a “-T4618” parameter, which limits the TokenAndPermUserStore cache size. (This was not one of the solutions listed in the Microsoft Knowledge Base for this issue – it was provided by a Microsoft Support Engineer).
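
If you want to experiment with the flag before touching the service startup parameters, trace flags can also be switched on globally at runtime; this does not survive a restart. A sketch, assuming a local default instance:

REM Enable trace flag 4618 for all sessions (-1 means global)
sqlcmd -S . -Q "DBCC TRACEON (4618, -1)"

For a permanent setting, add -T4618 to the service startup parameters as the support engineer described.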

Tuesday, June 22, 2010

Distraction Factor and Agile


In my last assignment I led a couple of teams using Scrum for Team System. We followed the process guidance religiously. We had the teams located in open bays with full line of sight to each member. We thought we saluted and followed the spirit of agile pretty closely. We were productive - much more so than before we switched to an agile format - and much more predictable in release punctuality. But then we stopped improving after about a year and plateaued. I wondered why.

Last January, I got everyone (20 people or so) into a room for lunch and a rap session. I planned on an hour and we went two and a half. I wanted to know "How is it working? Are you happy with how we do things? Are you happy with our pace? What can we do better?" Far and away the biggest complaint was the interruption factor. Time and again people complained that the open office format was too distracting during coding and construction, when distractions are particularly harmful to productivity and quality.

The emphasis there is important. I believe there are times when communal living actually hurts forward progress in software development. Yes indeed, private offices are a good thing when it's time to get "in the zone" and code. Elaboration is done. Design is done. Implementation approaches have been hashed and rehashed. Now it's time to lay bricks. Coding is generally not a community project.

What's that sound? I think I can hear the "agile community" writ large howling from the rooftops. The XP people are seething. Nevertheless, I'm convinced controlling the distraction factor is something we in the agile community need to recognize as a real problem. Sometimes interruptions are best left until later.

How do we deal with it? My team had a couple of ideas. One was that people simply hang a "Do Not Disturb" sign for all to see - by the time you say "not now please" it's too late; you've been interrupted. Another was to set aside a "quiet area" in the office just for uninterrupted work. Working from home is also a good isolation tool for the right people at the right time.

A high-level conclusion I drew from this feedback was to remember that the team needs to feel comfortable. If half of them are fighting the environment then it's something to fix. One of my roles as a leader is to tear down the obstacles inhibiting my team. I'm completely comfortable doing things outside the lines of "the book" if the team wants it that way and produces more that way.
Cross-posted here.

Sunday, June 20, 2010

Mingle With Postgres on Ubuntu 10.04

We're evaluating Mingle, an agile project management solution from ThoughtWorks. ThoughtWorks offers versions of Mingle for both Windows and Linux, integrated with either Oracle or Postgres databases. We recommend Linux and Postgres. ThoughtWorks also provides canned installations of Mingle and Postgres on VMware virtual machines for download. Our style is to get the full experience, so starting from a pre-installed VM isn't for us.

After struggling to get Mingle working with Postgres and Oracle on Windows 7, we were told by Mingle's support team that only Windows XP and Windows Server 2003 are supported at this time.

Since we've been diving into Linux more aggressively the past several months, we decided to bring up an Ubuntu Lucid Lynx (10.04) virtual machine using VirtualBox and start there. Installation of Mingle is pretty straightforward. Begin with Postgres, create a mingle user with DBA privileges, create an empty database for Mingle, then unpack the Mingle tarball and follow the installation instructions.
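
Here's a rough sketch of those database steps on Lucid. The tarball name is hypothetical, and we grant the role create-database rights rather than full DBA privileges, which the update below suggests is sufficient:

# Install Postgres, then create the Mingle role and an empty database it owns
sudo apt-get install postgresql
sudo -u postgres createuser --createdb --no-superuser --no-createrole mingle
sudo -u postgres createdb -O mingle mingle
# Unpack Mingle and follow its installation instructions (hypothetical tarball name)
tar xzf mingle_unix_3_1.tar.gz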

Update 29 June 2010

We have Mingle 3.1.1 running on Windows 7 x64 Ultimate with PostgreSQL 8.4. Following the defaults works fine. Install Postgres first, create an account called mingle with the create-databases role, and create a database called mingle. You can call the user and the database anything you like. Finally, install Mingle.
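
On Windows the same database setup can be scripted with psql, which ships with Postgres. A sketch; the password is a placeholder:

REM Create the mingle role (able to create databases) and the mingle database
psql -U postgres -c "CREATE ROLE mingle LOGIN CREATEDB PASSWORD 'changeme';"
psql -U postgres -c "CREATE DATABASE mingle OWNER mingle;"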

Wednesday, June 2, 2010

Installing SharePoint On Windows 7

Microsoft supports installing SharePoint Foundation 2010 on Windows 7. This is quite useful if you want to do custom development for SharePoint in Visual Studio 2010, which requires the SharePoint server to be installed on the same machine as the development environment. Prior to the 2010 release local SharePoint development required running Windows Server as your desktop OS or running a VM.

We installed SharePoint Foundation 2010 on a 64-bit Windows 7 desktop box today. The process went smoothly. If you develop for SharePoint we recommend Windows 7 as a solid platform from which to build.

Microsoft’s official article on this is here. There’s a comment on the thread that elaborates a bit on the experience here. Click on the date-time stamp to expand the comment text.

We’re happy to answer questions in comments in this space.

Tuesday, June 1, 2010

Windows Startup Analysis

RunAnalyzer by Safer Networking is a great tool for digging in to tune Windows performance. It gives you a deep look at everything in the system that initiates a service or task-tray program at Windows launch. This tool is not for the meek: it lets you manipulate the Windows registry easily, although access is limited to the registry entries involved in starting Windows. Here are a couple of screen shots.

[RunAnalyzer screen shots]

Wednesday, May 26, 2010

Installing Plugins In Eclipse on Ubuntu 10.04

You may see this error when installing a plugin into Eclipse on Ubuntu 10.04:

An error occurred while installing the items session context was (profile=PlatformProfile, phase=org.eclipse.equinox.internal.provisional.p2.engine.phases.Install, operand=null --> [R]org.eclipse.cvs 1.0.400.v201002111343,  action=org.eclipse.equinox.internal.p2.touchpoint.eclipse.actions.InstallBundleAction).
  The artifact file for osgi.bundle,org.eclipse.cvs,1.0.400.v201002111343 was not found.

We got this when installing the Aptana Studio plugin. The fix is to install PDE. On the command line run this:

sudo apt-get install eclipse-pde

Friday, May 7, 2010

Ubuntu Linux Release 10.04 Shines

We're running the final production bits for Ubuntu 10.04 Linux in a VirtualBox virtual machine with 1GB of memory on a Windows 7 host with 4GB of RAM on the hardware. Deliciously zippy and stable.

We advise stepping the default OS font size down from 10 to 8 for much more efficient use of screen real estate. It's easy to do this using System -> Appearance -> Fonts.
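
If you prefer the command line, the same change can be made with GNOME 2's gconftool-2. "Sans" is the stock application font, so substitute yours if it differs:

# Set the application font to 8 points
gconftool-2 --type string --set /desktop/gnome/interface/font_name "Sans 8"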

Tuesday, May 4, 2010

TFS 2010 With MOSS 2007

It is supposed to work, though I have not tried it. A blog post on how to do it is here.

Thursday, April 22, 2010

Just Enough or Pack Rat?

Software craftsmanship is like life. Some people plan their lives to the hilt, load up on contingency plans, ready for anything. They save things for years – packed into closets, files and attics – just in case "I need them". Let's call these folks "pack rats". Others take each day as it arrives, flexible in spirit and schedule, open to new opportunities without worrying about changing tons of plans and rearranging the calendar.

A full SDLC development process attracts pack rats. They plan and plan, establish contingency plans and stack the process so they don't make a mistake. Along the way they pay huge "carrying costs" to keep inventories of plans, processes, documents, meetings and calendars. Their users are frustrated because little is delivered, and it takes a long time.

Agile methods are the "just enough" approach. We do as much as we need to deliver great, usable software that delights our users and no more. We're constantly open to change - able to turn on a dime, because we don't have a fifty-page plan to revise and four levels of approval to obtain. Our users are thrilled because we spend our time focused on their needs rather than on "process". We deliver maximum bang for the buck, where bang is defined as usable software solving real problems. We have way more fun!

Are you a “just enough” craftsman or a “pack rat”?


TFS In The Cloud

Our work bringing up TFS 2010 on the Amazon EC2 cloud is bearing fruit. It’s been a bear dealing with EC2’s stubborn dynamic IP addresses; you get fresh ones each time you boot an instance. This, well, sends DNS on Windows Domains into a tizzy. We’ve solved it for now with a workaround. We’re thrilled about some secret sauce we’ve designed for running very large domain networks on EC2.  We’ve had a system up and under test for about six weeks. It is hyper-fast!

Interested in giving it a try? Ping us.

SQL Server 2008 @ Amazon EC2 “Event 17058 File not found”

(Cross-posted here from an 18 March 2010 post on my personal blog at markr.com.)

Ran into an interesting little glitch this afternoon bringing up SQL Server 2008 at Amazon EC2 using one of their packaged instances. I got this error:

Log Name: Application
Source: MSSQLSERVER
Date: 3/17/2010 10:00:14 PM
Event ID: 17058
Task Category: Server
Level: Error
Keywords: Classic
User: N/A
Computer: ip- xxxxxxxx        
Description:
initerrlog: Could not open error log file ''. Operating system error = 3(The system cannot find the path specified.).
Event Xml:
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
<System>
<Provider Name="MSSQLSERVER" />
<EventID Qualifiers="49152">17058</EventID>
<Level>2</Level>
<Task>2</Task>
<Keywords>0x80000000000000</Keywords>
<TimeCreated SystemTime="2010-03-17T22:00:14.000Z" />
<EventRecordID>1583</EventRecordID>
<Channel>Application</Channel>
<Computer>ip-xxxxx</Computer>
<Security />
</System>
<EventData>
<Data>
</Data>
<Data>3(The system cannot find the path specified.)</Data>
<Binary>A2420000100000000C000000490050002D0030004100460030003300390038004300000000000000</Binary>
</EventData>
</Event>

After a little digging – actually a lot of digging – I discovered this article, which says you can get this if the SQL Server machine is also a Windows domain controller. I applied both Workarounds 1 and 2 to ALL of the SQLServer* security groups on the domain.
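
For illustration only - the authoritative steps are in the article - granting one of those groups rights on the instance's Log directory from the command line looks roughly like this, where the path, host and group names are assumptions for a default SQL 2008 instance:

REM Grant the SQL Server security group full control of the error log directory
icacls "C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Log" /grant "SQLServerMSSQLUser$MYHOST$MSSQLSERVER":(OI)(CI)F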

TFS Check-in Policy for Exactly One Work Item

Enforcing work item associations on check-in is vital for assuring traceability. Here's a handy TFS policy for forcing check-ins to be associated with one and only one work item.

http://blog.accentient.com/2009/12/15/CustomCheckinPolicyForExactlyOneWorkItem.aspx

Using Subversion With Visual Studio

There’s a nifty tool called VisualSVN, which comes with a separately usable server (free) and VS client plug-in ($49). You can get by with the server and use the included management console plug-in to handle check-in, check-out, branching and merging. The client plug-in delivers nice integration into Visual Studio’s solution explorer and a full-service menu to boot. The server includes Subversion 1.6.9.

If you are switching from TFS to SVN you'll want to make sure to unbind your solution and component projects from TFS using File –> Source Control –> Change Source Control. If you don't do this you risk confusing results, because the normal TFS sub-menu items like "check-in", "get latest version", etc. do not map to VisualSVN. SVN has its own collection of items toward the bottom of the sub-menu for elements in Solution Explorer. Unbinding removes the TFS items from the Solution Explorer sub-menu.

Myth of Optimization Through Decomposition

This hit me like a freight train when I read it.

In Alan Shalloway's Lean Online Training, we're learning about the Myth of Optimization Through Decomposition, which states that trying to go faster by optimizing each individual piece does not speed up the system.

In the physical world of manufacturing, attempting to run every single machine at 100% utilization results in large piles of unfinished product just sitting around waiting to get through the next step of the pipeline or for a buyer. These unfinished products incur significant costs in terms of inventory and storage. And, whenever the product line is changed or stopped, whatever is sitting in that pipeline winds up being thrown away. This is why physical operations do best when they use a Just In Time strategy -- creating only what they need and no more. It turns out that operating each machine at 100% utilization is actually a really bad business decision.

In the world of software development, the parallel to running every machine at 100% utilization is making sure every employee is busy 100% of the time. And, just like in the physical world, this results in large amounts of unfinished works in progress that incur significant costs and risks. Knowledge degrades quickly, requirements get out of date, the feedback loop is delayed so we don’t learn what we’re doing wrong. The result is unfinished, untested, misunderstood, and often flat-out unnecessary code bogging down our product, degrading its quality, and, actually slowing us down.

  • It's difficult implementing a feature that was specified so long ago that no one can remember what it's for.
  • It's hard tracking down an error in code developed so long ago that no one remembers how it was implemented.
  • It's slow adding new features when the software is muddled with unfinished, untested code (that isn't even needed!).

Thus, Lean teaches us that striving for 100% utilization is not the answer. It doesn't get the product completed any more quickly, and the only thing it creates is waste.

The only way to go faster is to optimize the whole. In other words, find your bottlenecks -- the things that are slowing down the process, incurring delays, and adding waste -- and remove those. And when you do, a funny thing happens: your developers work faster! They're happier, you're happier, and ultimately the customer is happier.

From my own experience I offer some indicators that reveal the truth of this:

It's difficult implementing a feature that was specified so long ago that no one can remember what it's for.

Imagine managing development for 3-4 major products and shared infrastructure, each with a product backlog of anywhere from dozens to over one hundred items! Imagine product owners who want to estimate everything they can imagine in a product over N releases up-front, “so we can inform the contents of each release partially based on how big things are.”

In my experience with, say, e-commerce web applications and user stories of any reasonable fidelity, it's unusual to pack more than ten items into any release. More than that requires too much time for the release or too many people to get the job done in a reasonable time. By the time you're done, it's likely that ten more things have appeared that are at least as important to the business as the next ten in the backlog. A backlog longer than ten is waste in this situation.

I need to say something about story fidelity. When I see a backlog of dozens or hundreds I usually see very finely-grained stories. In my experience, a story that doesn't stand on its own when implemented in a product is too fine-grained for a product backlog. A story needs to describe a complete picture, so that when someone unfamiliar with the backlog reads it two months later, they can quickly and easily understand the feature.

Now, I realize I have made a nasty generalization with my ten-item-backlog example. The point is that a backlog of 100 is pretty darn difficult to prioritize and manage. The backlog simply becomes a list of things someone one day thought were needed. Estimating it is waste. Prioritizing it is probably impossible.

Finally, as a development manager you need to resist aggressive product owners who try to pack as much as they can onto your agenda. The belief is that if every available hour of every resource is planned, we are working at maximum efficiency. Wrong. Dense-packing software development teams like that guarantees lots of overtime, missed deadlines or both. If you schedule everyone to the limit you have no "surge capacity". Without surge capacity you're dead: you are forced into nights and weekends and you end up with grumpy people.

It's hard tracking down an error in code developed so long ago that no one remembers how it was implemented.

I have a rule: Whenever you work on old code you always refactor it to leave it better than you found it. If you’re a good programmer do you ever remember working on old code you couldn’t improve? I don’t.

It's slow adding new features when the software is muddled with unfinished, untested code (that isn't even needed!).

I recently helped my company with some technical due diligence evaluating the acquisition of another company - call it Company B - and its software. Company B uses a development process in which they release their product religiously every X weeks, pretty much regardless of whether new features are completely finished. They have conditioned their user community to expect partially completed or incompletely tested features; indeed, they say their users enjoy being treated to "sneak preview" features, and Company B uses the feedback to improve those features before they are completely done. As a business model this works for them, and that is wonderful.

I argue that care needs to be taken with this style of development. It is easy to get distracted and start adding new things without finishing old ones, leaving the code littered with partially completed work. Clever branching might help mitigate this problem, but it adds complexity to the development process nevertheless.

How to: Move Your Team Foundation Server from One Hardware Configuration to Another

There are several reasons you may want to move TFS from one platform to another.

  • Expansion and growth. Your existing single-server implementation is creaking and sputtering – on old hardware to boot. You’ve got everything on one server.
  • You’re upgrading hardware.
  • Your system has failed and you need to stand up a new one fast.
  • You want to separate a bunch of apps like SharePoint, TFS and SQL Server onto different servers.

The first thing you need to do is read this article about ten times. You need to really understand what it says. I used it to move TFS from a physical server running TFS 2008, SharePoint 2007, Project Server 2007, Team System Web Access and Scrum for Team System to a virtual server last month. If you follow this article you should not have trouble.

The second thing you need to do is reserve a full day to pull this off. Take your time. Check off your progress through each step. Go for a walk every hour or two and collect yourself.

Finally, here’s a tip from my experience. There’s a database on TFS called TfsIntegration. Opening its tbl_service_interface table  in SQL Server Management Studio reveals contents that look like this:

[screen shot: the tbl_service_interface rows, with the server name “mattie” appearing throughout]

On a properly configured system all instances of “mattie” in the above will be the name of your server. If you are using DNS then your domain name should be there.
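
A quick way to eyeball that table without opening Management Studio; the server name here is an assumption:

REM Dump the service interface entries TFS has registered
sqlcmd -S mynewserver -d TfsIntegration -Q "SELECT * FROM tbl_service_interface"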

In the same database look inside tbl_Registration_extended_attributes for the following.

[screen shot: the tbl_Registration_extended_attributes contents]

Here you want to use the network name of your server, not the domain.

AFTER you complete your migration and test the system, back up the TfsIntegration database on the new server. You'll want to restore it after you make a fresh restoration of the old databases, before you "go live" on the new system.
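
If you prefer the command line, that backup is a one-liner; the server name and backup path are examples:

REM Back up TfsIntegration on the new server
sqlcmd -S mynewserver -Q "BACKUP DATABASE TfsIntegration TO DISK = 'C:\Backups\TfsIntegration.bak'"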