I recently took on a Windows Universal project. As with any multi-device project, one of the goals is to share as much code as possible to avoid writing the same thing twice.

There is no conditional compilation in XAML, so separate XAML files are needed in the cases where complete sharing is not possible. Luckily, the Windows Universal project structure is set up so that all you need to do to share a XAML file is move it into the Shared folder/project.

Styles are the appropriate way of providing consistent styling across multiple pages/sections of the application, so it makes sense to try to place those in a common area. However, there will be some styles which are specific to the Windows or Windows Phone projects. The tricky part is finding a way to share the bulk of the style, except for those pieces which are platform-specific.

My first thought was to have a SharedStyles.xaml in the Shared folder, and a PlatformSpecificStyles.xaml in each of the Windows and WindowsPhone projects. Then in App.xaml, include the shared file first, followed by the platform-specific file. Something like this:

ResourceSharing.Shared\SharedStyles.xaml

<ResourceDictionary
 xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" 
 xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">

 <Style TargetType="HubSection" x:Key="HubSectionStyle">
  <Setter Property="Background" Value="Pink" />
 </Style>
</ResourceDictionary>

ResourceSharing.Windows\PlatformSpecificStyles.xaml

<ResourceDictionary
 xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" 
 xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">

 <Style TargetType="HubSection" BasedOn="{StaticResource HubSectionStyle}">
  <Setter Property="Foreground" Value="Purple" />
 </Style>
</ResourceDictionary>

ResourceSharing.WindowsPhone\PlatformSpecificStyles.xaml

<ResourceDictionary
 xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" 
 xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">

 <Style TargetType="HubSection" BasedOn="{StaticResource HubSectionStyle}">
  <Setter Property="Foreground" Value="Blue" />
 </Style>
</ResourceDictionary>

ResourceSharing.Shared\App.xaml

<Application
    x:Class="ResourceSharing.App"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
 <Application.Resources>
  <ResourceDictionary>
   <ResourceDictionary.MergedDictionaries>
    <ResourceDictionary Source="SharedStyles.xaml" />
    <ResourceDictionary Source="PlatformSpecificStyles.xaml" />
   </ResourceDictionary.MergedDictionaries>
  </ResourceDictionary>
 </Application.Resources>
</Application>

However, it turns out that doesn't work. In order for ResourceDictionary A to reference a resource from ResourceDictionary B, ResourceDictionary A needs to merge in ResourceDictionary B itself. So the end result looked like this:

ResourceSharing.Shared\SharedStyles.xaml
unchanged

ResourceSharing.Windows\Styles.xaml (renamed from PlatformSpecificStyles.xaml)

<ResourceDictionary
 xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" 
 xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">

 <ResourceDictionary.MergedDictionaries>
  <ResourceDictionary Source="SharedStyles.xaml" />
 </ResourceDictionary.MergedDictionaries>
 
 <Style TargetType="HubSection" BasedOn="{StaticResource HubSectionStyle}">
  <Setter Property="Foreground" Value="Purple" />
 </Style>
</ResourceDictionary>

ResourceSharing.WindowsPhone\Styles.xaml (renamed from PlatformSpecificStyles.xaml)

<ResourceDictionary
 xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" 
 xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">

 <ResourceDictionary.MergedDictionaries>
  <ResourceDictionary Source="SharedStyles.xaml" />
 </ResourceDictionary.MergedDictionaries>
 
 <Style TargetType="HubSection" BasedOn="{StaticResource HubSectionStyle}">
  <Setter Property="Foreground" Value="Blue" />
 </Style>
</ResourceDictionary>

ResourceSharing.Shared\App.xaml

<Application
    x:Class="ResourceSharingHubApp.App"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
 <Application.Resources>
  <ResourceDictionary>
   <ResourceDictionary.MergedDictionaries>
    <ResourceDictionary Source="Styles.xaml" /> <!-- both Shared and PlatformSpecific -->
   </ResourceDictionary.MergedDictionaries>
  </ResourceDictionary>
 </Application.Resources>
</Application>

Hope that helps.

This blog was cross posted on the Crafting Bytes blog at Resource Sharing in Windows Universal Apps

DbDeploy (and its .NET counterparts) work OK when everyone is working on or checking into a single master branch.  Here is an example of the normal use case, so that we can compare to the more complicated use cases that follow.

DeveloperA needs to make a change to the database and they see the delta scripts are up to 77, so they make delta script 78 and continue testing the code that they wrote to work with those database changes.  Meanwhile, DeveloperB also sees that the scripts are up to 77, and so they make a script 78 and start testing with their code changes.  Let's say that DeveloperA finishes first and checks in.  DeveloperB goes to check in, sees they are out of date, pulls the latest, runs the unit tests again, and blamo! – a failure (two script 78s).  At this point they are faced with an annoyance that can be worked around.  They need to roll back their script 78, run the other script 78, rename their script to 79, then re-run the unit tests and check in.

Let's take the same scenario and use date-based numbering, or timestamping.  The last delta script checked in on master is 140901.1047.  Notice I have switched to using decimals as the script numbers, with the number being yyMMdd.hhmm.  DeveloperA wants to make script change 140907.1532 and DeveloperB wants to make 140908.0854.  When DeveloperB goes to check in, they pull the latest and run the unit tests.  At this point the tool could roll back 140908.0854, apply 140907.1532, then re-apply 140908.0854.  Or, if you are "feeling lucky", the tool could just "fill in the hole" and apply 140907.1532, leaving the other script alone.  The determination of whether or not to roll back could be made by whether there are undo scripts for all of the scripts that would need to be rolled back.  If there are, use the rollback; if not, just apply the missing script.
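To make the numbering concrete, here is a small sketch (purely illustrative, not part of DbDeploy or any of its ports) of how a yyMMdd.hhmm script number could be generated as a decimal:

using System;
using System.Globalization;

public static class ScriptNumbers
{
    // Produce a decimal script number such as 140907.1532 from a timestamp:
    // yyMMdd before the decimal point, hhmm (24-hour clock) after it.
    public static decimal FromTimestamp(DateTime when)
    {
        return decimal.Parse(
            when.ToString("yyMMdd.HHmm", CultureInfo.InvariantCulture),
            CultureInfo.InvariantCulture);
    }
}

// Example: new DateTime(2014, 9, 7, 15, 32, 0) yields 140907.1532, which sorts
// after 140903.1242 and before 140908.0854, so scripts stay in date order.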

The problem gets much more complicated when there are multiple developers working on multiple feature branches.  This is more like the Git branching model.  In this scenario let’s say there are two teams, TeamA and TeamB.  Each of the teams develops a set of scripts to support their particular feature. 

Let’s say TeamA develops:
140903.1242
140909.1117
140909.1734

And TeamB develops
140904.1512
140905.0933
140911.1802

Assuming TeamA checks into master first, when TeamB gets latest they merge in TeamA's scripts.  *This time*, after TeamB checks in, TeamA will *also* need to merge in TeamB's scripts.  However, both teams should end up with a database that has the correct scripts applied to it. The possibility exists that one team's scripts will force another team's scripts to change.  Let's say TeamA's script 140903.1242 renames TableX.ColumnN to ColumnA, and TeamB's script 140904.1512 uses ColumnN (in a new view that they have created, for example). When TeamB gets latest and tries to run the unit tests, blamo! – error in script.  If we were "filling in the holes" it would actually be 140903.1242 that caused the error, and if we rolled back it would be 140904.1512 that would cause the error.  The point is that one or more scripts that have already been applied may need to change to support the incompatibility.

Timestamping doesn't solve everything, but it comes pretty close. One use case that isn't supported by timestamping is solved by hashing. Let's take the case of a single developer or team working on their own machine, trying to figure out the right way to make a script change.  They may try a first version of a script that uses a column with an integer type, but then realize that it must also allow NULLs.  They *could* create scripts to support every little change that they make, but that feels cumbersome, verbose, and confusing to humans trying to follow the chain of scripts.  It would be nice if the tool helped in this scenario.  So in the case of a single developer, they change the script that has already been run (which in fact may not even be the last script that was run) to allow NULLs.  Then the tool sees that the script has changed, backs up to just prior to that script, and runs the script again, creating the column with the correct type.
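A sketch of how that detection could work (again illustrative, not DbDeploy's actual implementation): record a hash of each script when it is applied, and compare the stored hash with the file on disk on the next run.

using System;
using System.IO;
using System.Security.Cryptography;

public static class ScriptHasher
{
    // Hash the script file so the tool can tell that an already-applied
    // script has been edited since it was run.
    public static string Hash(string scriptPath)
    {
        using (var sha = SHA256.Create())
        using (var stream = File.OpenRead(scriptPath))
        {
            return Convert.ToBase64String(sha.ComputeHash(stream));
        }
    }

    // True when the hash recorded at apply time no longer matches the
    // script on disk, meaning the tool should back up and re-run it.
    public static bool HasChanged(string scriptPath, string appliedHash)
    {
        return !string.Equals(Hash(scriptPath), appliedHash, StringComparison.Ordinal);
    }
}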

One last point. Now that we have scripts being applied and reapplied due to the two scenarios mentioned above, there is another change that we need to make when authoring the scripts so that we don’t lose data unnecessarily. Everyone knows to use RENAME instead of DROP and ADD. However let’s take the simple case of adding a new column. The script might look like this:

alter table Sales.Customers add Gender char(1);

Of course we would also want an undo operation:

alter table Sales.Customers add Gender char(1);
--//@UNDO
alter table Sales.Customers drop column Gender;

After the script has been applied, if we have been using the database and adding gender information, but for some reason we need to roll back and forward again, we lose all of our gender data. All we need to do is place the information into a temporary table prior to dropping the column. We need to save both the primary key and the dropped column. So a solution for SQL Server might look like this:

alter table Sales.Customers add Gender char(1);
if object_id('tempdb..#Sales_Customers_Gender') is not null
begin
	update c set
		c.Gender = tmp.Gender
	from Sales.Customers c
	inner join #Sales_Customers_Gender tmp on c.CustomerId = tmp.CustomerId
	drop table #Sales_Customers_Gender
end
--//@UNDO
select * into #Sales_Customers_Gender from (
	select CustomerId, Gender from Sales.Customers
) as genders;
alter table Sales.Customers drop column Gender;

It is a little more work, but it maintains the data, which is the primary goal of tools like DbDeploy anyway.

There are two types of database deployment tools, generally categorized as automatic and manual. The problem with the automatic kind is that it can't always figure out what to do. Here are some examples of things that automatic database migrations can't figure out, but that are fairly easy to code up manually:
1) When adding columns without defaults, what data should be used to fill in the values?
2) When splitting or moving columns, where does the existing data go?
3) When renaming a column, how does the tool detect that it isn’t just a drop and an add?
4) How should the script alter data when adding a constraint that renders it invalid?
5) When changing the data type of a column how should the existing data be converted?
6) What if data needs to be changed outside of or because of a schema change?
These reasons were paraphrased from this article.

Because of these issues, for large complicated databases or databases with a lot of critical data, most developers end up choosing explicit manual migrations. There are several tools of this nature out there, but the most widely known is DbDeploy.

DbDeploy itself is a Java program (http://dbdeploy.com/, https://code.google.com/p/dbdeploy/wiki/GettingStarted). It is one of the few database deployment programs that support both Oracle and moving forwards and backwards through the deltas. It is old (the second version came out in early 2007), but well used and respected by the community.

In 2007 DbDeploy.Net 1.0 was released (http://sourceforge.net/projects/dbdeploy-net/). It is called 1.0 even though it comes from the 2.0 version of the Java code. It was released and then kind of sat there on SourceForge because:
1) .NET developers weren't heavily contributing to Open Source in general.
2) SourceForge as a code hosting service was becoming less popular.
3) It did what it was designed to do, and no changes were necessary unless someone dreamed up a new feature.

Anyway, fast forward to 2009 and DbDeploy 3.0 for Java is released. Also during these years we start to see GitHub emerge as the Open Source market leader. Now here is where the problem comes in. Without notifying anyone, GitHub repositories are created for the DbDeploy (https://github.com/tackley/dbdeploy) and DbDeploy.Net (https://github.com/brunomlopes/dbdeploy.net) projects. However, there is no mention of them anywhere, so unless you specifically go and look there, you wouldn't know.

In 2012, Robert May aka rakker91 ports the Java 3.0 version to .NET, calls it DbDeploy 2 (even though it came from the Java version 3.0) and posts it on CodePlex (a Microsoft open source host). But again, unless you know to look there, nothing.

In 2013 Gregg Jensen makes the first significant outside contribution to DbDeploy.Net 1 in a while. As he does this he notes on the original SourceForge page:
“dbdeploy.NET has been updated by the community on GitHub. New documentation and features have been added at github.com/brunomlopes/dbdeploy.net. I have used dbdeploy.NET for a while, and I like how it works so I contributed there. Gregg”
This is the first breadcrumb left so that someone from the outside world could actually discover something is happening to the project.

Soon thereafter (July 2013) DbDeploy.NET 2 (the new code base) is formally released on CodePlex.

So in short we have two different independently evolving code bases to choose from. This is a problem in itself, because it means there has to be time spent investigating which code base is the correct one to start from. I downloaded both versions and started poking around. Here is what I found out:
1) DbDeploy.Net 1: Unit tests did not run, and it uses a schema that is harder to support with older versions of Oracle (uses an identity column which would need to be a sequence).
2) DbDeploy.Net 2 (from Java 3): Dropped Oracle support. This is something that is often done accidentally in a rewrite, but this was due to a lack of an Oracle database to test against.

I think Oracle is actually one of the most important use cases for DbDeploy.NET for several reasons. The first is that there are a ton of legacy Oracle databases, whereas a lot of SQL Server applications were written or rewritten with Entity Framework, which provides migrations out of the box. Also, there are a multitude of similar tools for SQL Server, but the database deployment tools for Oracle are lacking in many ways.

Anyway, I started tackling the problems in both codebases, and believe it or not it was actually easier for me to add Oracle support to the new program than it was for me to fix the unit tests and schema problems with the old program. So expect check-ins soon.

I don’t know why I have to blog about this. It depresses me and reflects poorly on our entire industry. It is 2014! Don’t we know better?!?

Why God? Why?!?

Apparently not. I have found myself *several* times in the past month, having to argue against a big rewrite of many thousands of lines of code. I am amazed and appalled that anyone still thinks this way, after *so* many articles have been written for *so* many years. Two of my favorites are Joel in the year 2000, and this more recent article that references one of my favorite cartoons (thanks to Lance for introducing me to the cartoon).

One thing that is different from when Joel wrote that blog post 14 years ago is that refactoring software is so much easier now. It is now *incredibly* easy, in addition to being much safer.

I don't want to go through the same arguments again, because so many others have made them for me. Unlike some of the people I have to convince, I am not stupid enough to think that I am the first person faced with this decision, or arrogant enough to dismiss what those hundreds of other people have said. However, I will offer one small piece of advice. Oftentimes the refactoring can happen *while* people are designing what the rewritten version is going to look like. At that point the code will be easy to change, and it will be a much simpler process to add the new features.

This blog was cross posted on the Crafting Bytes blog at Refactor vs Rewrite (again)

When I started my next project I switched from WatiN to Selenium, and I incorporated the Page Object Model. I had recently watched John Sonmez's Pluralsight videos on this topic (http://simpleprogrammer.com/2013/09/28/creating-automated-testing-framework-selenium/), so a lot of his ideas were shining through. There was a Pages class which had static properties for all of the page objects.
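Roughly, the idea looks like this (a sketch only; the class and member names are illustrative, not the actual code from that project):

using OpenQA.Selenium;

// Static entry point: one property per page object, so tests read as
// Pages.ItemDetails.SetName(...) instead of repeating element lookups.
public static class Pages
{
    // Set once by the test setup before any page object is used.
    public static IWebDriver Driver { get; set; }

    public static ItemDetailsPage ItemDetails
    {
        get { return new ItemDetailsPage(Driver); }
    }
}

public class ItemDetailsPage
{
    private readonly IWebDriver driver;

    public ItemDetailsPage(IWebDriver driver)
    {
        this.driver = driver;
    }

    // Page-specific actions encapsulate the element lookups for this page.
    public void SetName(string name)
    {
        driver.FindElement(By.Name("Name")).SendKeys(name);
    }
}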

Here are some of the highlights of that solution. We created some additional extension methods so that any web element could perform some common functions. Because Selenium's FindElement only looks below an element, and we needed a way of looking above an element, we adapted a hack that uses the XPath parent axis. Another really useful function is the ability to extract table information.

    
    using System.Collections.Generic;
    using System.Linq;
    using OpenQA.Selenium;

    public static class WebElementExtensions
    {
        // Selenium's FindElement only searches below an element, so use the
        // XPath parent axis to move one level up.
        public static IWebElement GetParent(this IWebElement element)
        {
            return element.FindElement(By.XPath("parent::*"));
        }

        // Walk up the ancestor chain until an element carrying the given
        // CSS class is found.
        public static IWebElement FindParentByClassName(
            this IWebElement element, string className)
        {
            if (element == null)
            {
                return null;
            }

            var classValue = element.GetAttribute("class");
            if (classValue != null && classValue.Contains(className))
            {
                return element;
            }

            return FindParentByClassName(element.GetParent(), className);
        }

        // Flatten a <table> element into rows of cell text, covering both
        // header (th) and data (td) cells.
        public static List<string[]> ToTable(this IWebElement element)
        {
            var rows = new List<string[]>();
            foreach (var tr in element.FindElements(By.TagName("tr")))
            {
                var thOrTds = tr.FindElements(By.TagName("th"))
                    .Union(tr.FindElements(By.TagName("td")));
                rows.Add(thOrTds.Select(c => c.Text).ToArray());
            }

            return rows;
        }
    }

In addition to the normal page object model, there are often menus or toolbars that cross pages. The original way we handled this was just to use base classes, but we soon started needing the base classes for things like steps in a wizard. So instead we moved those cross-page pieces to extension methods as well, based off of the BasePage. That way, when we created a new page that used an existing menu partial, we could use the extension methods to call those methods easily without any modifications. We found the easiest way to do this was based off of empty marker interfaces, because extension methods don't really support attributes and we needed some way of describing which extension methods were legal on which objects.

// Marker interface: implementing it on a page says "this page has the admin
// menu", which is what makes the extension methods below available on it.
public interface IHaveAdminMenu
{
}

public static class AdminMenuExtensions
{
    public static void AdminMenuClickItems(this IHaveAdminMenu adminMenu)
    {
        // The marker interface is only ever placed on pages, so the cast is safe.
        var basePage = (BasePage) adminMenu;
        basePage.Driver.FindElement(By.Id("itemsLink")).Click();
    }
}

This blog was cross posted on the Crafting Bytes blog at Web UI Testing Part 4: Extension methods in Page Object Model

Whether you end up with WatiN or Selenium for automating the browser actually doesn't matter that much. Whichever mechanism you use should be hidden behind a Page Object Model. This actually took me a while to discover because it wasn't really in your face on the WatiN and Selenium forums. In fact, even once I knew about the pattern I didn't feel the need for it at first. It was similar to having a domain controller for a couple of computers: overkill at that scale. However, as the sites I was writing and testing got more complicated, I needed a way of organizing the methods that manipulate the pages into a logical grouping. It makes sense to make an object model that encapsulates the IDs, classes, tags, etc. inside a page so that they can be reused easily. Let's look at a simple example in WatiN, prior to putting in the Page Object Model.

[Given(@"I am on an item details page")]
public void GivenIAmOnAnItemDetailsPage()
{
    browser = new IE("http://localhost:12345/items/details/1?test=true");
}

[When(@"I update the item information")]
public void WhenIUpdateTheItemInformation()
{
    browser.TextField(Find.ByName("Name"))
        .TypeTextQuickly("New item name");
    browser.TextField(Find.ByName("Description"))
        .TypeTextQuickly("This is the new item description");
    var fileUpload = browser.FileUpload(Find.ByName("pictureFile"));
    string codebase = new Uri(GetType().Assembly.CodeBase).AbsolutePath;
    string baseDir = Path.GetDirectoryName(codebase);
    string path = Path.Combine(baseDir, @"..\..\DM.png");
    fileUpload.Set(Path.GetFullPath(path));
}

The ?test=true in the first method is interesting, but that is the subject of another blog post. Instead, notice the Find.ByName("Name") in the second method. Now what if there is another method where I need to check the name to see what is there? And yet another where I need to both check it *and* update it? So I would have three places and four lines where that Find.ByName("Name") is used.

What happens when I change the element to have a different name? Every test where I have used Find.ByName("Name") breaks. I have to go through, find them all, and update them.

Let's look at the same two methods, but this time with a Page Object Model.

[Given(@"I am on an item details page")]
public void GivenIAmOnAnItemDetailsPage()
{
	browser = new IE(Pages.ItemDetails.Url);
}

[When(@"I update the item information")]
public void WhenIUpdateTheItemInformation()
{
	Pages.ItemDetails.SetName("New item name");
	Pages.ItemDetails.SetDetails("This is the new item description");
	Pages.ItemDetails.SetPictureFile("DM.png");
}

A couple of interesting things happened. The first is that the test is a lot more readable. The second is that I now have a central place to change when something on the page changes. I fix one line, and now all of the tests are running again.
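For illustration, the page object behind those calls might look roughly like this (a sketch only; the member names mirror the usage above, and the browser wiring is simplified rather than copied from the actual project):

using System;
using System.IO;
using WatiN.Core;

// The page object owns the URL, the element lookups, and the file-path
// handling that previously lived in the test steps.
public class ItemDetailsPage
{
    private readonly Browser browser;

    public ItemDetailsPage(Browser browser)
    {
        this.browser = browser;
    }

    public string Url
    {
        get { return "http://localhost:12345/items/details/1?test=true"; }
    }

    public void SetName(string name)
    {
        browser.TextField(Find.ByName("Name")).TypeText(name);
    }

    public void SetDetails(string details)
    {
        browser.TextField(Find.ByName("Description")).TypeText(details);
    }

    public void SetPictureFile(string fileName)
    {
        // Resolve the file relative to the test assembly, as the original step did.
        var codebase = new Uri(GetType().Assembly.CodeBase).AbsolutePath;
        var baseDir = Path.GetDirectoryName(codebase);
        var path = Path.Combine(baseDir, @"..\..\" + fileName);
        browser.FileUpload(Find.ByName("pictureFile")).Set(Path.GetFullPath(path));
    }
}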

So to recap, Page Object Models are great when either the pages are volatile or the same pages are being used for lots of different tests.

This blog was cross posted on the Crafting Bytes blog at Web UI Testing Part 3: Page Object Model

Although Xamarin has been around for a while now, there were cross platform mobile projects where I did not recommend its use. These were generally projects that had a large number of screens. In such cases the process of creating multiple versions of every screen could make the project too difficult and time consuming to write, and I might instead recommend going with PhoneGap if the shop had web development experience. Now that Xamarin Forms has been released (at the end of May), all of the UI can go into a common layer, written in a single paradigm. This allows the vast majority of the assets to be reused across platforms. Of course it still doesn't mean that *all* of the code will go into a common layer, just that *most* of it will.

On a personal note, it is interesting that the Xamarin Forms release happened while we were in the middle of the mobile development track for the San Diego TIG. It is changing the industry, and it also changed our track. We removed the PhoneGap meeting from the end of the track after a brief discussion of the technology.

[Update (September): One thing I have noticed since I started using Xamarin regularly is the flakiness of the product. When you pay this much for a product you expect a higher level of quality. The cycle of start debugging, then restart debugging, then stop the simulator and restart debugging again is getting old.]

This blog was cross posted on the Crafting Bytes blog at Xamarin Forms changes the game

Because of the two problems I mentioned with back-door web testing (changes to layout and no JS testing), I was looking to pursue front-door web testing toward the end of 2012.

My first thought was that whatever framework I chose should have a test recorder so that writing the tests would be much easier than having to code up every little click and wait. The problem with this philosophy is that most of these test recorders generate code. It turns out that generating code in a maintainable way is hard, and all code should be maintainable, even test code. So I scrapped that path, and started looking at using a nice API to drive the browser.

I looked at two different frameworks in .NET for accomplishing this: WatiN and Selenium. Both had great feature sets and either one would have been suitable. At the time, Selenium's documentation was way too fragmented. There were multiple versions: Selenium 1.0, Selenium RC, Selenium 2.0, etc. Because I was new to it, I wasn't sure which one to use (e.g. was 2.0 stable?). I would do a search and end up on a blog post using an outdated method, or the blog post didn't indicate which version of the API was being used. I found WatiN's documentation to be much clearer on the .NET side, so I went with that.

[Update: Selenium has been on 2.0 for a while, and the older documentation is becoming less relevant in search engines, so I would probably go with Selenium today.]

This blog was cross posted on the Crafting Bytes blog at Web UI Testing Part 2: Front-door testing tools

Llewellyn Falco and I had a conversation many years ago (June 2010?) about the best way to test Web UI. During that conversation we referred to the two classifications/mechanisms of web testing as front-door and back-door web testing. That is how I still think of the two types many years later, although I recognize that not many people in the industry use those terms.

In front-door web testing you use the browser to drive the test, which more closely tests what the user sees, but offers limited ability to manipulate or control the data and other dependencies. The other drawback of this type of testing is that if the test modifies data, there needs to be some way to get back to a clean slate after the test finishes.

In back-door web testing you call the controller or presenter directly (this assumes you are using the MVC pattern, or have done a good job of separating the greedy view into a presenter). The advantage of this approach is that you can more easily control the dependencies and data context under which the test runs by using in-memory repositories, mocks, and things of that nature. The main issue with this type of testing is that these controller methods return some sort of model and view name, making it difficult to test what the user sees. Because of this, you can have complete test coverage over the controllers but still have bugs in the view.
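To make the back-door idea concrete, here is a minimal sketch (the controller, repository, and model names are hypothetical stand-ins, not code from a real project) of a call that goes straight to the controller with an in-memory dependency:

using System;
using System.Web.Mvc;

public interface IItemRepository
{
    string GetItemName(int id);
}

public class ItemsController : Controller
{
    private readonly IItemRepository repository;

    public ItemsController(IItemRepository repository)
    {
        this.repository = repository;
    }

    public ActionResult Details(int id)
    {
        // Returns a model and a view name; the view itself is never rendered here.
        return View("Details", (object)repository.GetItemName(id));
    }
}

public class InMemoryItemRepository : IItemRepository
{
    public string GetItemName(int id) { return "Test item " + id; }
}

public static class BackDoorExample
{
    public static void Run()
    {
        // Dependencies are swapped for in-memory fakes; no browser, no HTTP.
        var controller = new ItemsController(new InMemoryItemRepository());
        var result = (ViewResult)controller.Details(1);
        Console.WriteLine(result.ViewName + ": " + result.Model);
    }
}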

In January of 2011, ASP.NET MVC 3 was released, which allowed different view engines to be used to render the views into the HTML that would be sent back to the client. Because the view engines were easily pluggable and the Razor engine was packaged separately, back-door tests could call the engine to produce HTML. This allowed back-door web testing to get closer to what the user was seeing, and eventually resulted in Llewellyn augmenting ApprovalTests with a mechanism for approving HTML.
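To make that concrete, here is a rough sketch (not Llewellyn's actual implementation, and the ViewRenderer helper is just an illustrative name) of rendering a view to a string through the pluggable view-engine API so the resulting HTML can be approved:

using System.IO;
using System.Web.Mvc;

public static class ViewRenderer
{
    // Render an MVC view into a string instead of the HTTP response, so a
    // back-door test can inspect or approve the HTML.
    public static string RenderViewToString(
        ControllerContext controllerContext, string viewName, object model)
    {
        controllerContext.Controller.ViewData.Model = model;
        using (var writer = new StringWriter())
        {
            // Ask the registered (pluggable) view engines for the view...
            var result = ViewEngines.Engines.FindPartialView(controllerContext, viewName);
            var viewContext = new ViewContext(
                controllerContext,
                result.View,
                controllerContext.Controller.ViewData,
                controllerContext.Controller.TempData,
                writer);
            // ...and render it into the writer rather than the response stream.
            result.View.Render(viewContext, writer);
            result.ViewEngine.ReleaseView(controllerContext, result.View);
            return writer.ToString();
        }
    }
}

// A test could then approve the rendered output, for example:
//   ApprovalTests.Approvals.VerifyHtml(ViewRenderer.RenderViewToString(context, "Details", model));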

However, there are still problems with this approach. Two of the biggest problems are:

  1. changes to the layout template break all tests
  2. inability to test JavaScript manipulations of the page

This blog was cross posted on the Crafting Bytes blog at Web UI Testing Part 1: Front-door and back-door testing

I just finished forming a new company called Crafting Bytes with Brad Cunningham and Ike Ellis. We wanted to start taking on bigger projects using the techniques that have made us so successful as consultants. Over time we have noticed that we do much better with projects when we take control of the project management as well as the development, rather than simply augmenting the development staff. So how is our project management style different from that of the other companies we work with?

One major difference is that we don't use Scrum. In a sprint, developers spend a lot of time estimating the work. Estimating can be important when the estimate is used to determine whether or not the work should take place at all. However, in most cases companies were using the estimate to tell management when the product was supposed to ship, so that they could relay that information to the customer. It would be a better idea to relay the information to the customer *after* the work has been completed. It is much more accurate that way. The other reason managers were requiring estimates was to figure out how much work should be completed in a sprint. So in other words, managers were requiring estimates solely to serve the project management methodology they were using.

The thing is, estimates take a lot of time, and they are rarely accurate. Our thought was: let's forget Scrum and just go with a simple Kanban board (from the Lean school of thinking). By doing this we can save ourselves countless hours of trying to figure out how long things are going to take, and spend more time simply doing them.

OK, so that saves a couple of days every sprint, but then what is the purpose of having a project manager at all? I admit that the project managers at many companies are totally unnecessary. You know the type: they spend most of their time polling individual people, "are you done yet?" They could easily be replaced by voice recognition software that recognizes the word "yes". This isn't really project management, it is project reaction. Project *management* would be managing the work of the project prior to it starting. In short, a project manager's job is to figure out which work is the most important and which work is so unimportant that it doesn't need to be done at all. Great project managers remove all unnecessary tasks, that is, all tasks that don't lead to working software, and prioritize which features are the most important for the business and the user. In other words, they control the prioritized list.

Thinking of project management as simply controlling the list of things that need to get done and prioritizing the most important simplifies the job of the project manager and helps the team achieve minimum “time to value” – a vastly underrated metric.

This blog was cross posted on the Crafting Bytes blog at Project Management As A Prioritized List