When we started writing Angular 2 apps, we came from an AngularJS background, so of course our first forms were template driven. All we had to change from AngularJS was ng-model to ngModel and put it in a banana box. As long as the form’s validation remains simple, template-driven forms are probably the way to go, primarily due to their simplicity. However, as the complexity of the form grows, especially its validation, the readability and even feasibility of the template go bad pretty quickly. The only other downside to template-driven forms is that they can’t easily be unit tested. In our company most of these sorts of things were tested at an end-to-end level using Gherkin anyway, so that was less of an issue for us. But again, as the complexity grows you might need to start unit testing those edge cases.

So let’s take a relatively straightforward, concrete example where the validation requirements might force us into implementing Reactive, or Model-Driven, forms. One side note about the nomenclature: we try to use “Model-Driven” in our company, because we have React projects as well, and things can get pretty confusing when distinguishing between a React form and a Reactive form. The example we are going to look at is a change-password form. The validation requirements are that the new password is strong and that the two entries match.

First let’s take a look at the template-driven HTML (change-password.component.html):

<div class="container">
  <div class="row">
    <h2>Change Password</h2>
  </div>
  <div class="row">
    <div *ngFor="let error of formErrors" class="list-group-item list-group-item-danger">{{error}}</div>
  </div>
  <form class="small-form">
    <div class="row">
      <div class="form-group">
        <label for="oldPassword">Old Password</label>
        <input id="oldPassword" type="password" name="oldPassword"
            [(ngModel)]="oldPassword" class="form-control"/>
      </div>
    </div>
    <div class="row">
      <div class="form-group">
        <label for="newPassword">New Password</label>
        <input id="newPassword" type="password" name="newPassword"
            [(ngModel)]="newPassword" class="form-control"/>
      </div>
    </div>
    <div class="row">
      <div class="form-group">
        <label for="confirmNewPassword">Confirm New Password</label>
        <input id="confirmNewPassword" type="password" name="confirmNewPassword"
            [(ngModel)]="confirmNewPassword"class="form-control"/>
      </div>
    </div>
    <div class="row">
       <button id="btnChangePassword" type="button" class="btn btn-primary"
            (click)="changePassword()">Change Password</button>
    </div>
  </form>
</div>

And here is the TypeScript (change-password.component.ts):

export class ChangePasswordComponent {
  formErrors: string[];

  oldPassword: string;
  newPassword: string;
  confirmNewPassword: string;

  constructor() { }

  changePassword() {
    // submit to server
    if (this.newPassword !== this.confirmNewPassword) {
      this.formErrors = ["Passwords don't match"];
    } else {
      this.formErrors = [];
    }
  }
}

So far there is no validation. It *is* possible to write template validation using directives, but it is much simpler using Model-Driven forms. To convert over, the first thing we need to remember is to add the ReactiveFormsModule to the module (app.module.ts):

import { FormsModule, ReactiveFormsModule } from '@angular/forms';

and

imports: [
  BrowserModule, FormsModule, ReactiveFormsModule
],

Then at the top of your component (change-password.component.ts) you will need to add:

import { FormBuilder, FormGroup } from '@angular/forms';

In the fields section of your component replace

  oldPassword: string;
  newPassword: string;
  confirmNewPassword: string;

with

  form: FormGroup;

Lastly, change the constructor to this:

constructor(protected formBuilder: FormBuilder) {
  this.form = formBuilder.group({
    oldPassword: [''],
    newPassword: [''],
    confirmNewPassword: ['']
  });
}

So there is now one field where there used to be three, but the constructor was expanded to initialize the form with those three controls.
Now let’s change the HTML (change-password.component.html). First find the form element, and change

  <form class="small-form">

to

  <form [formGroup]="form" class="small-form">

Lastly, change every banana-in-a-box ngModel, i.e. [(ngModel)], to formControlName. Here is an example change:

        <input id="oldPassword" type="password" name="oldPassword"
               [(ngModel)]="oldPassword" class="form-control"/>

to

        <input id="oldPassword" type="password" name="oldPassword"
               formControlName="oldPassword" class="form-control"/>

After doing that for all three fields, voilà! The form is converted. (One thing to remember: changePassword() should now read its values from the form group, e.g. this.form.value.newPassword and this.form.value.confirmNewPassword, instead of the old component fields.)

In Part 2 I will talk about the validation.

Angular 2.0 was finally released on September 15th. We started a new project in early October, so we decided to try it out. Pretty quickly the question came up: which module loader should we use for the new application?

The Angular 2.0 tutorials use SystemJS, except for a few pages referencing Webpack, so we started leaning towards SystemJS. Then I came across a webpack article in the Angular documentation. In it, it says:

It’s an excellent alternative to the SystemJS approach we use throughout the documentation

Well, if it is such an “excellent alternative”, why wasn’t it used in the documentation instead of SystemJS itself?

I also found this on Stack Overflow.

Webpack is a flexible module bundler. This means that it goes further [edit: than SystemJS] and doesn’t only handle modules but also provides a way to package your application (concat files, uglify files, …). It also provides a dev server with live reload for development.

SystemJS and Webpack are different but with SystemJS, you still have work to do (with Gulp or SystemJS builder for example) to package your Angular2 application for production.

So Webpack can do more, point for Webpack.

And then I found this article

Angular 2 CLI moves from SystemJS to Webpack

Google itself is now using webpack? Game over, webpack wins.

This blog was cross posted on the Crafting Bytes blog at Webpack vs SystemJS

I recently took on a Windows Universal project. As with any multi-device project, one of the goals is to share as much code as possible to avoid writing the same thing twice.

There is no conditional compilation in XAML, so separate XAML files are needed in the cases where complete sharing is not possible. Luckily, the Windows Universal project structure is set up so that all you need to do to share a XAML file is move the file into the Shared folder/project.
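
There *is* conditional compilation in the shared C# code, though: the Windows project defines WINDOWS_APP and the Windows Phone project defines WINDOWS_PHONE_APP, so shared code-behind can branch per platform. Here is a quick sketch (the class and method are illustrative, not part of the sample project below):

// ResourceSharing.Shared\BackButtonHelper.cs (illustrative file and class name)
using Windows.UI.Xaml.Controls;

public static class BackButtonHelper
{
    public static void Attach(Page page)
    {
#if WINDOWS_PHONE_APP
        // Only the phone project defines WINDOWS_PHONE_APP, and only phones
        // have a hardware back button to wire up.
        Windows.Phone.UI.Input.HardwareButtons.BackPressed += (sender, e) =>
        {
            if (page.Frame != null && page.Frame.CanGoBack)
            {
                e.Handled = true;
                page.Frame.GoBack();
            }
        };
#endif
        // The Windows project (WINDOWS_APP) relies on on-screen navigation,
        // so there is nothing to do here for that platform.
    }
}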

Styles are the appropriate way of providing consistent styling across multiple pages/sections of the application, so it makes sense to place those in a common area. However, there will be some styles which are specific to the Windows or Windows Phone project. The tricky part is finding a way to share the bulk of a style, except for those pieces which are platform specific.

My first thought was to have a SharedStyles.xaml in the Shared folder, and a PlatformSpecificStyles.xaml in each of the Windows and WindowsPhone directories, and then in App.xaml include the shared file first, followed by the platform-specific file. Something like this:

ResourceSharing.Shared\SharedStyles.xaml

<ResourceDictionary
 xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" 
 xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">

 <Style TargetType="HubSection" x:Key="HubSectionStyle">
  <Setter Property="Background" Value="Pink" />
 </Style>
</ResourceDictionary>

ResourceSharing.Windows\PlatformSpecificStyles.xaml

<ResourceDictionary
 xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" 
 xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">

 <Style TargetType="HubSection" BasedOn="{StaticResource HubSectionStyle}">
  <Setter Property="Foreground" Value="Purple" />
 </Style>
</ResourceDictionary>

ResourceSharing.WindowsPhone\PlatformSpecificStyles.xaml

<ResourceDictionary
 xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" 
 xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">

 <Style TargetType="HubSection" BasedOn="{StaticResource HubSectionStyle}">
  <Setter Property="Foreground" Value="Blue" />
 </Style>
</ResourceDictionary>

ResourceSharing.Shared\App.xaml

<Application
    x:Class="ResourceSharing.App"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
 <Application.Resources>
  <ResourceDictionary>
   <ResourceDictionary.MergedDictionaries>
    <ResourceDictionary Source="SharedStyles.xaml" />
    <ResourceDictionary Source="PlatformSpecificStyles.xaml" />
   </ResourceDictionary.MergedDictionaries>
  </ResourceDictionary>
 </Application.Resources>
</Application>

However, it turns out that doesn’t work. In order for ResourceDictionary A to reference a resource from ResourceDictionary B, ResourceDictionary A needs to merge in ResourceDictionary B itself. So the end result ended up looking like this:

ResourceSharing.Shared\SharedStyles.xaml
unchanged

ResourceSharing.Windows\Styles.xaml (renamed from PlatformSpecificStyles.xaml)

<ResourceDictionary
 xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" 
 xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">

 <ResourceDictionary.MergedDictionaries>
  <ResourceDictionary Source="SharedStyles.xaml" />
 </ResourceDictionary.MergedDictionaries>
 
 <Style TargetType="HubSection" BasedOn="{StaticResource HubSectionStyle}">
  <Setter Property="Foreground" Value="Purple" />
 </Style>
</ResourceDictionary>

ResourceSharing.WindowsPhone\Styles.xaml (renamed from PlatformSpecificStyles.xaml)

<ResourceDictionary
 xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" 
 xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">

 <ResourceDictionary.MergedDictionaries>
  <ResourceDictionary Source="SharedStyles.xaml" />
 </ResourceDictionary.MergedDictionaries>
 
 <Style TargetType="HubSection" BasedOn="{StaticResource HubSectionStyle}">
  <Setter Property="Foreground" Value="Blue" />
 </Style>
</ResourceDictionary>

ResourceSharing.Shared\App.xaml

<Application
    x:Class="ResourceSharingHubApp.App"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
 <Application.Resources>
  <ResourceDictionary>
   <ResourceDictionary.MergedDictionaries>
    <ResourceDictionary Source="Styles.xaml" /> <!-- both Shared and PlatformSpecific -->
   </ResourceDictionary.MergedDictionaries>
  </ResourceDictionary>
 </Application.Resources>
</Application>

Hope that helps

This blog was cross posted on the Crafting Bytes blog at Resource Sharing in Windows Universal Apps

DbDeploy (and its .NET counterparts) works OK when everyone is working on or checking into a single master branch. Here is an example of the normal use case, so that we can compare it to the more complicated use cases that follow.

DeveloperA needs to make a change to the database and sees the delta scripts are up to 77, so they make delta script 78 and continue testing the code that they wrote to work with those database changes. Meanwhile DeveloperB also sees that the scripts are up to 77, so they make a script 78 and start testing with their code changes. Let’s say that DeveloperA finishes first and checks in. DeveloperB goes to check in, sees they are out of date, pulls the latest, runs the unit tests again, and blamo! – a failure (two script 78s). At this point they are faced with an annoyance that can be worked around: they need to roll back their script 78, run the other script 78, rename their script to 79, then re-run the unit tests and check in.

Let’s take the same scenario and use date-based numbering, or timestamping. The last delta script checked in on master is 140901.1047. Notice I have switched to using decimals as the script numbers, with the number being yyMMdd.hhmm. DeveloperA wants to make script change 140907.1532 and DeveloperB wants to make 140908.0854. DeveloperB goes to check in, pulls the latest, and runs the unit tests. At this point the tool could roll back 140908.0854, apply 140907.1532, then re-apply 140908.0854. Or, if you are “feeling lucky”, the tool could just “fill in the hole” and apply 140907.1532, leaving the other script alone. The determination of whether or not to roll back could be made by whether there are undo scripts for all of the scripts that would need to be rolled back: if there are, roll back; if not, just apply the missing script.
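
To make that concrete, here is a rough C# sketch of how a tool might choose between the two strategies. None of this is actual DbDeploy code; the type and method names are made up for illustration.

using System.Collections.Generic;
using System.Linq;

public class DeltaScript
{
    public decimal Number;   // yyMMdd.hhmm, e.g. 140907.1532m
    public bool HasUndo;     // does the file contain an --//@UNDO section?
}

public static class UpgradePlanner
{
    public static void Apply(List<DeltaScript> applied, List<DeltaScript> missing)
    {
        // Any applied script newer than the oldest missing one would have to be
        // undone and redone to keep strict timestamp order.
        decimal oldestMissing = missing.Min(s => s.Number);
        var newerThanMissing = applied.Where(s => s.Number > oldestMissing)
                                      .OrderByDescending(s => s.Number)
                                      .ToList();

        if (newerThanMissing.All(s => s.HasUndo))
        {
            // Safe path: roll back, then re-apply everything in timestamp order.
            foreach (var script in newerThanMissing)
                RunUndo(script);
            foreach (var script in missing.Concat(newerThanMissing).OrderBy(s => s.Number))
                RunDo(script);
        }
        else
        {
            // "Feeling lucky" path: just fill in the holes, leaving applied scripts alone.
            foreach (var script in missing.OrderBy(s => s.Number))
                RunDo(script);
        }
    }

    static void RunDo(DeltaScript script) { /* execute the script against the database */ }
    static void RunUndo(DeltaScript script) { /* execute its --//@UNDO section */ }
}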

The problem gets much more complicated when there are multiple developers working on multiple feature branches. This is more like the Git branching model. In this scenario let’s say there are two teams, TeamA and TeamB. Each team develops a set of scripts to support their particular feature.

Let’s say TeamA develops:
140903.1242
140909.1117
140909.1734

And TeamB develops:
140904.1512
140905.0933
140911.1802

Assuming TeamA checks into master first, when TeamB gets latest they merge in TeamA’s scripts. *This time*, after TeamB checks in, TeamA will *also* need to merge in TeamB’s scripts. However, both teams should end up with a database that has the correct scripts applied. The possibility exists that one team’s scripts will force another team’s scripts to change. Let’s say TeamA’s script from 140903.1242 renames TableX.ColumnN to ColumnA, and TeamB’s script from 140904.1512 uses ColumnN (in a new view that they have created, for example). When TeamB gets latest and tries to run the unit tests, blamo! – an error in the script. If we were “filling in the holes” it would actually be 140903.1242 that caused the error, and if we rolled back it would be 140904.1512 that caused the error. The point is that one or more scripts that have already been applied may need to change to resolve the incompatibility.

Timestamping doesn’t solve everything, but it comes pretty close. One use case that isn’t supported by timestamping is solved by hashing. Take the case of a single developer or team working on their own machine, trying to figure out the right way to make a script change. They may try a first version of a script that uses a column with an integer type, but then realize that it must also allow NULLs. They *could* create a new script for every little change that they make, but that feels cumbersome, verbose, and confusing to anyone trying to follow the chain of scripts. It would be nice if the tool helped in this scenario. So in the case of a single developer, they change the script that has already been run (which may not even be the last script that was run) to allow NULLs. Then the tool sees that the script changed, rolls back to just before that script, and runs it again to create the column with the correct definition.
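
The change-detection piece could be as simple as storing a hash of each script when it is applied. Here is an illustrative sketch (again, not the real tool’s API):

using System;
using System.IO;
using System.Security.Cryptography;

public static class ScriptChangeDetector
{
    // The hash is stored in the change-log table when the script is first applied.
    public static string ComputeHash(string scriptPath)
    {
        using (var sha = SHA256.Create())
        using (var stream = File.OpenRead(scriptPath))
        {
            return Convert.ToBase64String(sha.ComputeHash(stream));
        }
    }

    // If the file on disk no longer matches the stored hash, the tool should
    // undo back to just before this script and re-apply from there.
    public static bool HasChanged(string scriptPath, string storedHash)
    {
        return ComputeHash(scriptPath) != storedHash;
    }
}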

One last point. Now that we have scripts being applied and reapplied due to the two scenarios mentioned above, there is another change that we need to make when authoring the scripts so that we don’t lose data unnecessarily. Everyone knows to use RENAME instead of DROP and ADD. However, let’s take the simple case of adding a new column. The script might look like this:

alter table Sales.Customers add Gender char(1);

Of course we would also want an undo operation:

alter table Sales.Customers add Gender char(1);
--//@UNDO
alter table Sales.Customers drop column Gender;

After the script has been applied, if we have been using the database and adding gender information, and then for some reason we need to roll back and forward again, we lose all our gender data. All we need to do is place the information into a temporary table prior to dropping the column. We need to save both the primary key and the dropped column. So a solution for SQL Server might look like this:

alter table Sales.Customers add Gender char(1);
-- restore any Gender data saved by a previous undo
if object_id('tempdb..#Sales_Customers_Gender') is not null
begin
	update c set
		c.Gender = tmp.Gender
	from Sales.Customers c
	inner join #Sales_Customers_Gender tmp on c.CustomerId = tmp.CustomerId;
	drop table #Sales_Customers_Gender;
end
--//@UNDO
-- save the Gender data (keyed by CustomerId) before dropping the column
select * into #Sales_Customers_Gender from (
	select CustomerId, Gender from Sales.Customers
) as genders;
alter table Sales.Customers drop column Gender;

It is a little more work, but it maintains the data, which is the primary goal of tools like DbDeploy anyway.

There are two types of database deployment tools, generally categorized as automatic and manual. The problem with the automatic kind is that it can’t always figure out what to do. Here are some examples of things that automatic database migrations can’t figure out, but that are fairly easy to code up manually:
1) When adding columns without defaults, what data should be used to fill in the values?
2) When splitting or moving columns, where does the existing data go?
3) When renaming a column, how does the tool detect that it isn’t just a drop and an add?
4) How should the script alter data when adding a constraint that renders it invalid?
5) When changing the data type of a column how should the existing data be converted?
6) What if data needs to be changed outside of or because of a schema change?
These reasons were paraphrased from this article.

Because of these issues, for large complicated databases or databases with a lot of critical data, most developers end up choosing explicit manual migrations. There are several tools of this nature out there, but the most widely known is DbDeploy.

DbDeploy itself is a Java program (http://dbdeploy.com/, https://code.google.com/p/dbdeploy/wiki/GettingStarted). It is one of the few database deployment programs that support Oracle as well as moving forwards and backwards through the deltas. It is old (the second version came out in early 2007), but well used and respected by the community.

In 2007 DbDeploy.Net 1.0 was released (http://sourceforge.net/projects/dbdeploy-net/). It is called 1.0 even though it comes from the 2.0 version of the Java code. It was released and then kind of sat there on SourceForge because:
1) .NET developers weren’t heavily contributing to Open Source in general.
2) SourceForge as a code hosting service was becoming less popular.
3) It did what it was designed to do, and no changes were necessary unless someone dreamed up a new feature.

Anyway, fast forward to 2009 and DbDeploy 3.0 for Java is released. Also during these years we start to see GitHub emerge as the Open Source market leader. Now here is where the problem comes in. Without notifying anyone, GitHub repositories are created for the DbDeploy (https://github.com/tackley/dbdeploy) and DbDeploy.Net (https://github.com/brunomlopes/dbdeploy.net) projects. However, there is no mention of them anywhere, so unless you specifically go and look there, you wouldn’t know.

In 2012, Robert May aka rakker91 ports the Java 3.0 version to .NET, calls it DbDeploy 2 (even though it came from the Java version 3.0) and posts it on CodePlex (a Microsoft open source host). But again, unless you know to look there, nothing.

In 2013 Gregg Jensen makes the first significant outside contribution to DbDeploy.Net 1 in a while. As he does this he notes on the original SourceForge page:
“dbdeploy.NET has been updated by the community on GitHub. New documentation and features have been added at github.com/brunomlopes/dbdeploy.net. I have used dbdeploy.NET for a while, and I like how it works so I contributed there. Gregg”
This is the first breadcrumb left so that someone from the outside world could actually discover something is happening to the project.

Soon thereafter (July 2013) DbDeploy.NET 2 (the new code base) is formally released on CodePlex.

So in short, we have two different, independently evolving code bases to choose from. This is a problem in itself, because it means time has to be spent investigating which code base is the correct one to start from. I downloaded both versions and started poking around. Here is what I found out:
1) DbDeploy.Net 1: Unit tests did not run, and it uses a schema that is harder to support with older versions of Oracle (uses an identity column which would need to be a sequence).
2) DbDeploy.Net 2 (from Java 3): Dropped Oracle support. This is something that is often done accidentally in a rewrite, but this was due to a lack of an Oracle database to test against.

I think Oracle is actually one of the most important use cases for DbDeploy.NET for several reasons. The first is that there are a ton of legacy Oracle databases, whereas a lot of SQL Server databases were written or rewritten with Entity Framework, which provides migrations out of the box. Also, there are a multitude of similar tools for SQL Server, but the database deployment tools for Oracle are lacking in many ways.

Anyway, I started tackling the problems in both codebases, and believe it or not it was actually easier for me to add Oracle support to the new program than it was for me to fix the unit tests and schema problems with the old program. So expect check-ins soon.

I don’t know why I have to blog about this. It depresses me and reflects poorly on our entire industry. It is 2014! Don’t we know better?!?

Why God? Why?!?

Apparently not. I have found myself *several* times in the past month having to argue against a big rewrite of many thousands of lines of code. I am amazed and appalled that anyone still thinks this way, after *so* many articles have been written over *so* many years. Two of my favorites are Joel’s article from the year 2000, and this more recent article that references one of my favorite cartoons (thanks to Lance for introducing me to the cartoon).

One thing that is different from when Joel wrote that blog post 14 years ago is that refactoring software is now *incredibly* easy, in addition to being much safer.

I don’t want to go through the same arguments again, because so many others have done it for me. Unlike some of the people I have to convince, I am not stupid enough to think that I am the first person faced with this decision, or arrogant enough to dismiss what those hundreds of other people have said. However, I will offer one small piece of advice: oftentimes the refactoring can happen *while* people are designing what the rewritten version is going to look like. At that point the code will be easy to change, and adding the new features will be a much simpler process.

This blog was cross posted on the Crafting Bytes blog at Refactor vs Rewrite (again)

When I started my next project I switched from WatiN to Selenium, and I incorporated the Page Object Model. I had recently watched John Sonmez’s Pluralsight videos around this topic (http://simpleprogrammer.com/2013/09/28/creating-automated-testing-framework-selenium/), so a lot of his ideas were shining through. There was a Pages class which had static properties for all of the page objects.

Here are some of the highlights of that solution. We created some additional extension methods so that any web element could perform some common functions. Because Selenium’s FindElement normally only looks below an element, and we needed a way of looking above an element, we modified this hack to use the XPath parent axis. Another really useful function is the ability to extract table information. (A short usage example follows the class below.)

    
    public static class WebElementExtensions
    {
        public static IWebElement GetParent(this IWebElement element)
        {
            return element.FindElement(By.XPath("parent::*"));
        }

        public static IWebElement FindParentByClassName(
            this IWebElement element, string className)
        {
            if (element == null)
            {
                return null;
            }

            var classValue = element.GetAttribute("class");
            if (classValue.Contains(className))
            {
                return element;
            }

            return FindParentByClassName(element.GetParent(), className);
        }

        public static List<string[]> ToTable(this IWebElement element)
        {
            var rows = new List<string[]>();
            foreach (var tr in element.FindElements(By.TagName("tr")))
            {
                var thOrTds = tr.FindElements(By.TagName("th"))
                    .Union(tr.FindElements(By.TagName("td")));
                rows.Add(thOrTds.Select(c => c.Text).ToArray());
            }

            return rows;
        }
    }
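
Usage then looks something like this (assuming a Selenium IWebDriver named driver; the element IDs are purely illustrative):

        // Climb from a known element up to its containing panel...
        var saveButton = driver.FindElement(By.Id("btnSave"));
        var panel = saveButton.FindParentByClassName("panel");

        // ...and pull a whole table out as rows of cell text.
        var rows = driver.FindElement(By.Id("resultsTable")).ToTable();
        var firstHeaderCell = rows[0][0];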

In addition to the normal page object model, there are often menus or toolbars that cross pages. The original way we handled this was just to use base classes, but we soon needed the base classes for things like steps in a wizard. So instead we moved those cross-page pieces into extension methods as well, based off of the BasePage. That way, when we created a new page that used an existing menu partial, we could call those methods easily without any modifications. We found the easiest way to do this was with empty marker interfaces, because extension methods don’t really support attributes and we needed some way of describing which extension methods were legal on which objects. (A usage sketch follows the snippet below.)

public interface IHaveAdminMenu
{
}

public static class AdminMenuExtensions
{
    public static void AdminMenuClickItems(this IHaveAdminMenu adminMenu)
    {
        var basePage = (BasePage) adminMenu;
        basePage.Driver.FindElement(By.Id("itemsLink")).Click();
    }
}
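
A page that contains the admin menu then just declares the marker interface, and the extension method lights up on it. Something like this (ItemsPage and the way the page object is constructed are sketches of our own framework, not a prescribed API):

public class ItemsPage : BasePage, IHaveAdminMenu
{
    // normal page-object members (finders, actions) live here
}

// In a test or step definition:
var itemsPage = new ItemsPage();     // or resolved through the Pages class
itemsPage.AdminMenuClickItems();     // extension method from AdminMenuExtensions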

This blog was cross posted on the Crafting Bytes blog at Web UI Testing Part 4: Extension methods in Page Object Model

Whether you end up using WatiN or Selenium for automating the browser actually doesn’t matter that much. Whichever mechanism you use should be hidden behind a Page Object Model. This actually took me a while to discover because it wasn’t really in your face on the WatiN and Selenium forums. In fact, even once I knew about the pattern I didn’t feel the need for it at first; it seemed like overkill, similar to having a domain controller for a couple of computers. However, as the sites I was writing and testing got more complicated, I needed a way of organizing the methods that manipulate the pages into logical groupings. It makes sense to make an object model that encapsulates the IDs, classes, tags, etc. inside a page so that they can be reused easily. Let’s look at a simple example in WatiN, prior to putting in the Page Object Model.

[Given(@"I am on an item details page")]
public void GivenIAmOnAnItemDetailsPage()
{
    browser = new IE("http://localhost:12345/items/details/1?test=true");
}

[When(@"I update the item information")]
public void WhenIUpdateTheItemInformation()
{
    browser.TextField(Find.ByName("Name"))
        .TypeTextQuickly("New item name");
    browser.TextField(Find.ByName("Description"))
        .TypeTextQuickly("This is the new item description");
    var fileUpload = browser.FileUpload(Find.ByName("pictureFile"));
    string codebase = new Uri(GetType().Assembly.CodeBase).AbsolutePath;
    string baseDir = Path.GetDirectoryName(codebase);
    string path = Path.Combine(baseDir, @"..\..\DM.png");
    fileUpload.Set(Path.GetFullPath(path));
}

The ?test=true in the first method is interesting, but that is the subject of another blog post. Instead, notice the Find.ByName(“Name”) in the second method. Now what if there is another method where I need to check the name to see what is there, and yet another where I need to both check it *and* update it? Then I would have three places and four lines where that Find.ByName(“Name”) is used.

What happens when I change the element to have a different name? Every test where I have used Find.ByName(“Name”) breaks. I have to go through and find them all and update them.

Let’s look at the same two methods, but this time with a Page Object Model.

[Given(@"I am on an item details page")]
public void GivenIAmOnAnItemDetailsPage()
{
	browser = new IE(Pages.ItemDetails.Url);
}

[When(@"I update the item information")]
public void WhenIUpdateTheItemInformation()
{
	Pages.ItemDetails.SetName("New item name");
	Pages.ItemDetails.SetDetails("This is the new item description");
	Pages.ItemDetails.SetPictureFile("DM.png");
}

A couple of interesting things happened. The first is that the test is a lot more readable. The second is that I now have a central place to change when something on the page changes. I fix one line, and all of the tests are running again.
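
For completeness, here is roughly what sat behind Pages.ItemDetails. The exact shape is a sketch (in particular, how the browser instance is shared is simplified to a static property here), but the WatiN calls are the same ones from the first version of the test:

using System;
using System.IO;
using WatiN.Core;

public static class Pages
{
    // The shared browser instance is set by the step definitions.
    public static IE Browser { get; set; }

    public static ItemDetailsPage ItemDetails
    {
        get { return new ItemDetailsPage(Browser); }
    }
}

public class ItemDetailsPage
{
    private readonly IE browser;

    public ItemDetailsPage(IE browser)
    {
        this.browser = browser;
    }

    public string Url
    {
        get { return "http://localhost:12345/items/details/1?test=true"; }
    }

    public void SetName(string name)
    {
        browser.TextField(Find.ByName("Name")).TypeTextQuickly(name);
    }

    public void SetDetails(string details)
    {
        browser.TextField(Find.ByName("Description")).TypeTextQuickly(details);
    }

    public void SetPictureFile(string fileName)
    {
        // Same relative-path trick as the original test code.
        string codebase = new Uri(GetType().Assembly.CodeBase).AbsolutePath;
        string baseDir = Path.GetDirectoryName(codebase);
        browser.FileUpload(Find.ByName("pictureFile"))
            .Set(Path.GetFullPath(Path.Combine(baseDir, @"..\..\" + fileName)));
    }
}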

So to recap, Page Object Models are great when either the pages are volatile or the same pages are being used for lots of different tests.

This blog was cross posted on the Crafting Bytes blog at Web UI Testing Part 3: Page Object Model

Although Xamarin has been around for a while now, there were cross-platform mobile projects where I did not recommend its use. These were generally projects that had a large number of screens; in such cases the process of creating multiple versions of every screen could make the project too difficult and time consuming to write. For those projects I might recommend going with PhoneGap if the shop had web development experience. Now that Xamarin Forms has been released (at the end of May), all of the UI can go into a common layer, written in a single paradigm. This allows the vast majority of the assets to be reused across platforms. Of course it still doesn’t mean that *all* of the code will go into a common layer, just that *most* of it will.

On a personal note, it is interesting that the Xamarin Forms release happened while we were in the middle of the mobile development track for the San Diego TIG. It is changing the industry, and it also changed our track: we removed the PhoneGap meeting from the end of the track after a brief discussion of the technology.

[Update (September): One thing I have noticed since I started using Xamarin regularly is the flakiness of the product. When you pay this much for a product you expect a higher level of quality. The start-debugging, restart-debugging, then stop-the-simulator-and-restart-debugging-again routine is getting old.]

This blog was cross posted on the Crafting Bytes blog at Xamarin Forms changes the game

Because of the two problems I mentioned with back-door web testing (changes to layout and no JS testing), I was looking to pursue front-door web testing toward the end of 2012.

My first thought was that whatever framework I chose should have a test recorder so that writing the tests would be much easier than having to code up every little click and wait. The problem with this philosophy is that most of these test recorders generate code. It turns out that generating code in a maintainable way is hard, and all code should be maintainable, even test code. So I scrapped that path, and started looking at using a nice API to drive the browser.

I looked at two different frameworks in .NET for accomplishing this: WatiN and Selenium. Both had great feature sets and either one would have been suitable. At the time, however, Selenium’s documentation was way too fragmented. There were multiple versions: Selenium 1.0, Selenium RC, Selenium 2.0, etc. Because I was new, I wasn’t sure which one to use (e.g. was 2.0 stable?). I would do a search and end up on a blog post using an outdated method, or the blog post didn’t indicate which version of the API was being used. I found WatiN’s documentation to be much clearer on the .NET side, so I went with that.

[Update: Selenium has been using 2.0 for a while, and the older documentation is becoming less relevant in search engines, so I would probably go with Selenium today]

This blog was cross posted on the Crafting Bytes blog at Web UI Testing Part 2: Front-door testing tools