new-tool debt, part 2

In my last post I described something called new-tool debt, which you incur each time you add someone else’s module to your project: you now owe a debt of research into this new tool to determine how it works.  Here I continue describing everything I’m learning about the behind-the-scenes code within Google’s Polymer Starter Kit.

The Development Cycle

A typical development cycle then looks like the following collection of commands.

$ gulp serve

This will run a gulp session using the gulpfile.js file to rebuild the code from your app folder into a temporary folder and then serve it to your browser at http://localhost:5000/.

$ vi app/index.html

I don’t actually use the vi text editor but you get the gist.  With each save of the file, gulp knows that it should rebuild and then refresh your browser session.  You continue in this edit + save + review loop.

The Production Push

I like to commit my code with git before pushing to production; it’s a good habit to follow.

$ git status
$ git add app/index.html
$ git commit -m "Edited to cleanup home page"

The first command shows the status of which file(s) have changed; here it would report that I’d edited the home page.  The second command stages the home page for source control and the third commits that collection with a label.

Now we’re ready to rebuild, upload to production and review the results.

$ gulp

This command then rebuilds the code in your app folder to the dist folder.

$ firebase deploy

This command uploads your code from the dist folder to your Firebase website location.

$ firebase open

This command will open a browser so that you may review your production website.


You may find that you need to order your edits so that gulp serve is happy.  The gulp session can crash if you attempt to reference, say, a custom element in your home page before you’ve saved your edits in the file(s) which create and register your new element. In this case, save all the files in question and then run gulp serve again.


Google has included a test platform within the code they’ve distributed.  They call it the Elements Tests Runner and it’s yet another thing that’s part of your new-tool debt.  So let’s see how to use it.

$ gulp serve

As usual, have gulp rebuild and serve up your website.  In your browser you’ll need to manually visit the http://localhost:5000/test/index.html URL by editing your browser’s location.  At the moment, my test produces the following page when it completes.

Output of /test/index.html

For reasons which were at first a mystery to me, Google decided to run each custom element test twice.  This explains what appears to be a doubling of tests.

Under the hood, when I review the app/test/index.html file, it pulls in a bower_components/web-component-tester.js file and then calls the WCT.loadSuites() method to list which tests it wants to run.
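For reference, the interesting part of that file is just a call handing WCT a list of suite URLs.  The suite names below are illustrative, not the kit’s actual file contents:

```javascript
// Inside app/test/index.html (suite names here are made up for illustration):
WCT.loadSuites([
  'my-greeting-basic.html',
  'my-greeting-basic.html?dom=shadow'   // same suite again, with a query argument
]);
```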


So we can see why each test ran twice.  There appears to be a query argument in each pair which requests a shadow DOM, whatever that is.  This page seems to know what it’s about.  I guess I’ll add a bookmark for myself and research later how it impacts my testing.

Since I’ve been radically modifying my Polymer Starter Kit to suit my own needs, I quickly broke the initial set of tests.  I immediately dropped the greeting custom element they included, which broke the included test.  I found it was then necessary to alter the tests being run and even to create my own custom elements and write tests for them.  And yet even though there is some coverage for my new custom elements, I get the feeling that I have only scratched the surface of what my tests should include to adequately test the boundaries.  And to be honest, I don’t really know this test platform yet.  Again, I’m confronted with the new-tool debt I owe this project before I should consider it a “safe” project.

I’d say it’s not bad for a weekend’s worth of work; you can visit the production site to see what it looks like for yourself.

In my next post, I’ll review an edit I had to make in the source code for the web-component-tester bower element.  I discovered an outstanding issue that hasn’t yet been fixed on their github site.  By altering my local version I was able to get the test suite to report the correct completion percentage.


new-tool debt

Here, I’m defining a new term for the learning debt you incur when you use a new tool within your project space.  It’s the accumulation of time you’ll spend to fully understand the new tool, what it does and doesn’t do, to know when it breaks, to recognize the new error messages it throws and to learn how to “pop the hood” and investigate further when there is trouble.

“New-tool debt” is the total time you’ll need to spend because you’ve added a new tool to your project.

Don’t get me wrong, I like many of the tools available in the world of open source.

Polymer Starter Kit

I’ve decided to use Google’s new Polymer collection of code and I find myself both excited and daunted.  Within moments, I have a template that’s generated lots of code for me, all within a new framework.  And yet now I have new tools that are unfamiliar to me.  I’ll walk you through the list.


Easy enough: the installation calls for the Node Package Manager, but I’m already familiar with this program and of course it’s already installed.

$ npm install

From my experience this tells npm to read the application folder’s package.json file and to update the node_modules folder with the dependencies required.
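For anyone who hasn’t seen one, package.json is a small JSON manifest.  A stripped-down sketch (the name and versions here are invented, not the kit’s actual file):

```json
{
  "name": "my-polymer-app",
  "version": "0.0.1",
  "devDependencies": {
    "gulp": "^3.8.5",
    "browser-sync": "^2.7.1"
  }
}
```

Running npm install walks the dependencies and devDependencies lists and populates the node_modules folder accordingly.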


The installation of the kit requires that I first install Bower.

$ npm install -g bower

This command would install Bower globally on my system.

$ bower install

Presumably, this runs Bower in my application’s folder and uses the bower.json file in a way that’s similar to npm.  The file seems to look like package.json.


Again, I’m already familiar with git as a great source control management tool.  Note that they indicated this is an optional step, which I should have paid more attention to.  It means that in a later step, the hosting site won’t pull my code from some repository, nor will it pay attention to the commit status of my individual files.  This is more important than you might think, since what is pushed to production should be exactly what has been committed.  That isn’t the case for what comes next in their demonstration.  Carrying on, though…

$ git init

This command would create a hidden .git folder, initializing an empty repository in the current directory.

$ git add . && git commit -m "Add Polymer Starter Kit."

Here, I would be adding all files into the git system to manage them and then committing them for the current version of the project.


The installation also calls for Gulp to be added to my system.

$ npm install -g gulp

This command would install Gulp globally on my system.

$ gulp

What seems to be happening here is that gulp reads in the contents of gulpfile.js and then does a number of things to collect files and process them into a subfolder called dist under my application folder.  The term they’re using is to build the source.  That dist folder will be used again in a moment.
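To make that a little less magical, here’s a hypothetical, radically stripped-down gulpfile.js in the gulp 3 style the kit uses; the real file registers many more tasks (vulcanizing, minifying and so on):

```javascript
// Illustrative sketch only; not the starter kit's actual gulpfile.
var gulp = require('gulp');

// Copy everything under app/ into dist/.
gulp.task('copy', function () {
  return gulp.src('app/**/*')
    .pipe(gulp.dest('dist'));
});

// Running plain `gulp` invokes the default task.
gulp.task('default', ['copy']);
```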

$ gulp serve

This command appears to be starting up a webserver on the localhost IP address and serving up the pages in my application folder.  Note that this will show updates to individual source files like app/index.html without first running another gulp command to build them.  In other words, it’s not serving up the pages in the dist folder but rather the files in my application folder.


Here in the demo Google is suggesting that I sign up for a Firebase account and push the code to this free hosting site.  I then visited the website and created an account where they generated a new virtual host and URL for me.  I then needed the command line interface for Firebase.

$ npm install -g firebase-tools

This would install the Firebase command line interface globally on my system.

$ firebase init

This command is supposed to ask some questions in order to create a firebase.json file in my application folder.  Unfortunately Google’s instructions were lacking and it’s first necessary to run the following command:

$ firebase login

This command will direct you to the Firebase website to create a login token for use with the other commands.  Having run this successfully, I was then able to run that firebase init again.  It allowed me to identify the previously-created virtual host at Firebase and it was then necessary for me to identify my application’s dist folder as the source.
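For what it’s worth, the firebase.json that init produces is tiny.  A sketch of the format, with a hypothetical app name:

```json
{
  "firebase": "my-app",
  "public": "dist",
  "ignore": ["firebase.json", "**/.*", "**/node_modules/**"]
}
```

The public entry is where I pointed it at my application’s dist folder.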

$ firebase deploy

This command will then take the contents of your dist folder and push it to your virtual host on Firebase to deploy to production.  Note that this isn’t the way other hosting sites work, e.g. Heroku, which require the code to be pushed to a github repository first; the publishing event then pulls from your github repository.  Personally, I prefer the Heroku version since it means that your code must be committed in git and pushed to the origin repository before it may be deployed to production.  The Firebase version means that un-committed code can be built and deployed, which to me doesn’t seem very controlled.  Returning to the Firebase commands…

$ firebase open

This convenience command will open a browser window to your remote virtual host at Firebase so that you can see your newly-deployed website.

New-Tool Debt

And now here is where the new tool started to rear its ugly head.  Google then asked me in the demonstration to modify the pages just slightly so that another menu item showed up.  This worked flawlessly while testing locally using the gulp serve command plus edits to two files.

No longer within the scope of their Getting Started information, I thought I’d then push these new changes to Firebase.  This is when I began to have problems.  And since some of these are new tools for me, I felt out of my depth in handling those problems.  In order to push the files from the dist folder it would be necessary to build them again:

$ gulp

And yet, gulp wasn’t happy.  After those slight edits to both app/index.html and app/elements/routing.html the code in gulpfile.js was now failing.  I reviewed the code I’d entered (by copying and pasting from their own website) but I’d entered everything flawlessly.  The problem was that Google’s own people hadn’t run gulp to re-compile their own code, and here I was now troubleshooting their own demo.  Not fun.

At this point I found it easier to abandon changes on both of the edited files.  Since I’d initialized git in the folder, this was then an option for me.

What I found frustrating is that gulp was identifying a problem with a file that simply didn’t exist within the project.  Maybe this is normal for gulp but I now have the burden of having to research what gulp does and how it fails in order to understand all this better. I’d like to use the new Polymer platform but this gulp failure is really just a wake-up call that I have new-tool debt that I haven’t paid off yet.

“…this gulp failure is really just a wake-up call that I have new-tool debt that I haven’t paid off yet.”

So be careful with what you bring into your project.  There may be a point in the near future where things are breaking for no apparent reason and others are asking you why.  The speed of your answer and your troubleshooting depends upon whether or not you’ve paid that new-tool debt with research.

one code to rule them all


Who’d have thought ten years ago that JavaScript would be so popular now?  I think we can reasonably thank Node.js, released back in 2009, for JavaScript’s enduring popularity.  No longer just the browser-side validation tool of its earliest days, it’s a full-blown programming language that’s reached maturity.

Officially, JavaScript’s been on the scene since 1995, over twenty years ago.  The original version was written in ten days.  It even appeared the same year as server-side but didn’t really take off as a backend coding tool until recently.  It wasn’t until Node.js’s asynchronous methodology that it could truly find its place in mainstream coding.

Standardized JavaScript

Fortunately for all of us, Netscape submitted the proposed JavaScript standard back then to Ecma International to formally get the language blessed as a standard.  Microsoft’s own version differed slightly at the time.  Having an unbiased third party like Ecma bless the standard would allow the rest of us some relief in the browser wars that were going on among the big players in this space.  Time has passed and we now anticipate the sixth formal JavaScript specification from Ecma to be implemented by the various browsers:  ECMAScript 6, also known as ES6 Harmony.


JavaScript Object Notation (JSON) is a useful standard for transferring and storing data.  Its biggest competitor in this space is probably XML and its many subsets as a means of storing and identifying data.  They’re both similar in that they store data that’s marked up with the field names.  And yet they’re different in the way that markup occurs.

JSON’s popularity now is almost wholly due to Node.js’s domination of the playing field.  It’s simple to open and use JSON data within JavaScript and since Node is the platform of choice, JSON can’t help but be the favorite storage and transfer format.
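If you haven’t seen why it’s so frictionless, parsing and serializing JSON are one-liners in plain JavaScript, no library required (the record below is invented for illustration):

```javascript
// Round-trip a JSON record: string -> object -> string.
const text = '{"name":"widget","qty":3}';

const record = JSON.parse(text);      // parse the string into a live object
record.qty += 1;                      // work with it as ordinary JavaScript

const out = JSON.stringify(record);   // serialize it back out
console.log(out);                     // {"name":"widget","qty":4}
```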


I could reasonably assert that there are two types of coders out there:  1) those who haven’t used Node.js yet and 2) those who love it.  It’s an awesome concept.  Write code in JavaScript and use Node to spawn (run) it.  Node manages an event queue for you and deals with what happens when some of your code takes longer than it should (“blocking calls”). You can create an entire webserver app within a few minutes with Node and, since JavaScript is such a well-known language among coders, the comfort level of the created code is higher than for the alternative language choices that are available.

“There are two types of coders out there:  1) those who haven’t used Node.js yet and 2) those who love it.”

With other languages and development platforms you scale it up by breaking your code into multiple threads of execution.  And in those other languages you have to manage inter-thread communication and timing.  In the Node.js world, though, you scale your app by having something bring up another instance of your main app itself.

Hosting a Node.js App

This new model of scaling matches nicely with a variety of cloud virtual computer providers such as Amazon and Microsoft.  Even better, a secondary market of Node.js platform providers like OpenShift and Heroku provide a space for your application to be hosted.  (Originally, you would have to create a virtual computer at Amazon, for example, install all the dependencies to run everything and then add your Node.js app.  But now, a provider like Heroku assumes that you have a Node.js app and they take care of the prep-work for you.)

If you haven’t already done so, check out Red Hat’s OpenShift website as well as Heroku.  Both offer a (typically) free tier if you accept the scalability defaults.  Both work quite well for hosting a Node.js application.  I would say that both sites offer good Getting Started documentation.  I will say that I found the Heroku site to be slightly easier as a beginner.  I’m currently hosting one Node.js app on each of them and am happy with both providers. Note that if your app needs additional “always on” (also known as “worker”) apps then you need to fully understand each provider’s pricing model before getting settled into either arrangement.  You might easily incur an approximately $50/month fee for such an app.  Otherwise, the base scalability of both providers is essentially free.

free doesn’t have to mean free

open-source  adjective  COMPUTING

denoting software for which the original source code is made freely available and may be redistributed and modified.

A lot of people seem to think that this means that you can’t charge money in an open source world.  Making the source code freely-available doesn’t mean that you can’t charge for the compiled program itself.

You might be surprised to read this GNU page on “free software”.

Actually, we encourage people who redistribute free software to charge as much as they wish or can. If a license does not permit users to make copies and sell them, it is a nonfree license. If this seems surprising to you, please read on.

The word “free” has two legitimate general meanings; it can refer either to freedom or to price. When we speak of “free software”, we’re talking about freedom, not price. (Think of “free speech”, not “free beer”.) Specifically, it means that a user is free to run the program, change the program, and redistribute the program with or without changes.

The problem then becomes the one where some of us don’t have lawyers and some of us do.  The ones who don’t have lawyers probably operate under the following assumption:

  • “since I downloaded freely-available open source software as dependencies to my own software, I can’t charge for compiled code which includes them”

Conversely, this might be the competing assumption by a large corporation who has lawyers and who understands the playing space better than you or I do:

  • “even though I downloaded freely-available open source software as dependencies to my own software, I can charge for compiled code which includes them”

So you should be able to see how a million or so open source coders can now be taken advantage of.  They actively participate in what they believe to be a generous social experiment and yet some of the players in the space aren’t being so generous.

As coders, we need to find the means for making money somehow for all this effort.  Open source is a great concept but it feels to me like people aren’t seeing the bigger picture here, the one in which some people are making money from your free labor and at your expense.

royalty-source code

The Concept

Here, I’m coining a new term for the software development world.  Similar to open source software, royalty-source code would be widely available for download via a system like github.  Unlike open source, however, royalty-source includes a mechanism so that each coder gets paid as programs get downloaded.  This new mechanism should fairly distribute royalties based upon some sort of weighted system for each of the dependencies.

Typical Dependencies

The following tree shows a typical collection of dependencies pulled into a new Node.js program named MyNodeProgram, as an example, with the single command npm install mongoose:

  • MyNodeProgram
    • mongoose
      • async
        • benchmark + 22 other dependent libraries
      • bson
        • nodeunit
        • one
        • benchmark
        • colors
      • hooks-fixed
        • expresso
        • should
        • underscore
      • kareem
        • acquit
        • gulp
        • gulp-mocha
        • gulp-jscs
        • istanbul
        • jscs
        • mocha
      • mongodb
        • etc
      • mpath
        • mocha
        • benchmark
      • mpromise
        • etc
      • mquery
        • etc
      • ms
        • etc
      • muri
        • etc
      • regexp-clone
        • etc
      • sliced
        • etc

Doing a directory listing of your program’s node_modules subdirectory should then reveal just how much code got pulled down with that single npm install command earlier.

Royalty Debt

Unlike open source coding projects, royalty-source coding projects add a new concept to the business model.  Royalty debt is the added virtual cost to your project with each npm install command you invoke.

If you envisioned charging $1.00 for each download of your own program and it now contains that mongoose dependency from before, you’ve greatly diluted your own code with the inclusion of this large collection of code that somebody else created.  If you created a typical Hello World program that uses a MongoDB database and assuming that you only add a few lines of your own code, then you might only get $0.01 of that $1.00 download royalty and everybody else would get the remaining $0.99 of that sale.  Without that mongoose inclusion, it would be unlikely that anyone would want to pay you $0.01 for your code, at least in theory.  You might then accept this 1% ratio as what it takes to make money.
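The arithmetic of that split, sketched out.  The line counts below are invented for illustration; only the $1.00 price and the $0.01/$0.99 split come from the example above:

```javascript
// Hypothetical royalty split, weighted by how much of the shipped code is yours.
const price = 1.00;
const myLines = 10;            // my own Hello World code (invented count)
const dependencyLines = 990;   // mongoose and everything it drags in (invented)
const totalLines = myLines + dependencyLines;

const myRoyalty = price * (myLines / totalLines);   // my 1% share
const dependencyRoyalty = price - myRoyalty;        // everybody else's share

console.log(myRoyalty.toFixed(2), dependencyRoyalty.toFixed(2)); // 0.01 0.99
```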

Instead, you might start to rethink your entire program from the ground up.  If you could remove the mongoose dependency and store your data in a simple BSON object instead you possibly could drop that dependency completely and the royalty debt it incurs.  Assuming that you no longer had any other dependencies and people are still happy to pay the $1.00 price tag, 100% of that is now yours to keep.

One big benefit to this concept of royalty debt is that it should incentivize coders to review that stack of dependencies to look for anything that’s been included and yet which doesn’t add much value to the overall project.  The coder then should remove any unwarranted code to lower the bloat effect we often see in open source projects.  Smaller code is often better code.

Pay the House

Obviously, whoever is the new github provider will charge a commission of some kind for each download.  The download price minus the commission equals the total royalty to be split up.

Hopefully this system would include balances which would allow someone with existing credits to then use them to download other code.

Royalty Distribution

Presumably, a value system would be in place which weighs each dependency against the parent program’s code with some kind of fair distribution model.  As in the above Node.js example, the benchmark code appears at least three times as a dependency and should then get a bigger piece of the pie, so to speak.
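One possible weighting scheme, sketched in code: each appearance of a module in the dependency tree earns it one share of the pool.  The appearance counts are read off the tree above; the pool size is invented:

```javascript
// Split a royalty pool in proportion to how often each module appears.
const appearances = { benchmark: 3, mocha: 2, colors: 1 }; // from the tree above
const pool = 0.60;  // hypothetical pool to split among these three modules

const totalShares = Object.values(appearances).reduce((a, b) => a + b, 0);
const payout = {};
for (const name of Object.keys(appearances)) {
  payout[name] = pool * appearances[name] / totalShares;
}

console.log(payout); // benchmark gets 3 of 6 shares, mocha 2, colors 1
```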

New Tools

Given that we’d now have royalty debt to think about before adding a new dependency to our project, we should have some tools to quantify that cost.  In theory, you’d run the npm install command with an extra argument which quantifies that module’s package.json dependency list.  Let’s say that you might run the following command from your MyNodeProgram folder:  npm install mongoose --cost

At this point, npm might indicate…

mongoose @4.3.5:  Cost=$0.99/$1.00

Note that this is assumed to be a guesstimate at this time since your own program’s few lines of code were compared to the collective code and dependent modules of mongoose to return this ratio.

As a coder, you now have a build-versus-buy decision:  1) build your own database code instead or 2) expect to pay for mongoose (and dependent code) with each sale/download of your own code.

Pay the Piper

Obviously, it wouldn’t be fair to the coders of your own dependencies if you under-valued your own program.  If you decided to charge only one cent for your Hello World example above, with all those dependencies, you’d be distributing someone else’s code for a small fraction of what it would otherwise be worth.  In this system each module author should be able to define a minimum cost for their library, and parent projects which include that code should honor this requirement.

The benefit of this minimum cost feature is that it will gradually increase the cost of increasingly-larger code.  If the total minimum cost for the mongoose module is $0.33 then the next author must charge a minimum amount which factors this into the total download price.
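One way such a floor could ripple upward is additively: a project’s minimum price is its own minimum plus the floors of everything it depends on.  The additive model is my own assumption here; the $0.33 figure is from above and the $0.10 is invented:

```javascript
// Recursively compute a module's price floor from its own minimum
// plus the floors of its dependencies.
function minimumPrice(module) {
  const deps = module.deps || [];
  return module.ownMinimum + deps.reduce((sum, d) => sum + minimumPrice(d), 0);
}

const mongoose = { ownMinimum: 0.33, deps: [] };   // total floor from the text
const myApp = { ownMinimum: 0.10, deps: [mongoose] }; // my own (invented) minimum

console.log(minimumPrice(myApp).toFixed(2)); // the least I could charge: 0.43
```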

Final Thoughts

So why should we bother with a royalty-source code system when open source seems to be working fine?

  1. We’re not getting paid to create software
  2. Everybody expects us to be responsive to their own needs (fixing bugs/issues) without considering that we normally expect to pay for services rendered
  3. Corporations are using open source code as a means of not paying software developers
  4. Other than a virtual tip jar, there doesn’t appear to be anywhere within the open source system where the coder gets anything for their efforts
  5. It costs a lot of money to pay rent, to eat and to otherwise afford a working computer and Internet connection as a developer


I suppose most of us these days have a github repository and this blog post would be the obligatory mention of mine.  I try to work on phone apps these days (PhoneGap) and Node.js backends every chance I get.

I can’t say that I’m very active on the open source side of things.  I only manage to throw down another version about once per week or so.  I find myself fairly ignorant of what goes on with respect to active multi-person open source projects—I only really have experience in bigger Microsoft programming shops on a Visual SourceSafe repository, for example.  And I have also used Microsoft Team Foundation Server as served up via the website from the Visual Studio program itself.

Outsource Guru on Github

I may start putting some of my older projects on there, time will tell.  I tend to select things that have a tutorial component to them so that I can demonstrate to others an intro to something.

Would I add a library to github?  At the moment, I don’t think so.  Libraries are useful to other open source coders but they’re also usable by well-financed corporations.  I’m usually paid to code, so I think I’d like to be careful about what I put out there for free.

A better github paradigm

What would be nice is if you earned credits with github for every download of your code.  And then you could spend these credits by paying for your own downloads on there.  Any individual without credits would need to buy them with money first if they wanted to download code.  If you could buy a credit for $1 then in theory you ought to be able to sell a bulk of credits for $0.75 each.  Github themselves would earn the difference since they’re hosting the platform itself.

And yet, it’s the norm these days for an average open source program to be made up of other open source code as dependencies.  So the crediting scheme would necessarily need to pay out fractional royalties to the people who created those dependent portions of code.  And so, if your own code is made up of 90% of other people’s code you would only see 10% of that credit and the rest would be distributed to them.

The benefit to a system like this is that a coder like myself—who’s used to getting paid to do this—gets paid for doing this.  Anyone who downloads code has to pay a credit from the balance on their account.  Having downloaded code they’re free to then use it.  And if they then re-bundle someone else’s code into their own then it’s become part of a commissioning scheme and everyone gets paid for their effort.

“The benefit to a system like this is that a coder like myself—who’s used to getting paid to do this—gets paid for doing this.”

An additional benefit to this system is that it no longer rewards the corporations who get a free pass to download unlimited code at your expense.

So the new system would work like iTunes, perhaps.  Maybe you could buy a card in a retail store with credits and redeem them on the site.  But if you created an account and uploaded a popular library then you could start earning credits almost immediately, in theory, anyway.

flogging a dead horse

Many times in my career I’ve been at some technical crossroads which demanded a decision on my part:

  1. stay the course with some primary skillset I’d been developing or
  2. branch off on some new expertise.

If you think about it, that’s a pretty big gamble.  What will hiring managers be looking for two or even five years in the future?  What will look better on your résumé, a couple more years of experience in the old skillset or the old skills plus the two years of the new skills?  Is it possible that continuing to work with the old skill will now somehow look bad for your career?  But then, if you include too many skills does it look like you’re not focused enough on anything to actually have expertise?

Recognize Trends

I’d suggest that the following trends are appearing in the development playing space.

  • Java is no longer trusted:  Oracle’s Java was a good idea back in the early ’90s.  It allowed coders to write one set of programming which could be compiled and then distributed and run on a variety of platforms.  Several security-related issues with Java have forced many to outright ban Java from workstations within organizations.  Apple’s Safari browser blocks the plug-in for Java now and Microsoft Internet Explorer in newer versions disables Java by default.
  • Objective-C is a pain:  Apple probably should have replaced this language when it introduced iOS.  Since it only really is used for Mac OS and iOS development, a coder’s skillset in this language limits them to just Macs, iPads, iPhones and the Apple Watch.
  • JavaScript is the new black:  Open source and Node.js have invigorated the JavaScript language.  In the past it was only really used for client-side browser validations and such but today, it’s being used for almost anything on the client or the server.  PhoneGap allows cross-platform phone app development in JavaScript, threatening to destroy all competitors in this space.  In Tolkienian terms, Javascript is the one ring to rule them all.
  • C and even C++ seem dated now:  C (circa 1972) and C++ (circa 1979) are wonderful languages and yet they’re over thirty years old and that makes them seem stale to coders today.  C# (circa 2000) is now over 15 years old and is beginning to feel the same fate.
  • .net is only for Windows:  Even though Microsoft had originally intended .net to compete with Java as a multi-platform coding option, you don’t see this in practice since nobody has worked on a UNIX .net platform to allow this to take place.  The trend would be that single-platform solutions don’t have enough market share to ultimately survive the test of time.
  • Every day there are more coders entering this space:  Schools globally have been pushing technical careers over the last three decades.  Outsourcing websites and better English training and translation software are allowing people in other countries to compete more effectively with U.S.-based coders.
  • It’s not just keyboards and mice anymore:  Hand-held devices, touchscreen monitors and see-through goggles may be the norm soon.
  • Apps and stores (not programs and major versions):  It used to be that a new version of a program was delivered and a major update cost money.  An app now usually comes with unlimited updates and yet “in app purchases” still allow a stream of money for the developer.  In fact, these updates allow the developer another marketing opportunity to up-sell the customer something else.  Apple has made so much money with iTunes that Microsoft has completely re-tooled their own operating system to chase that same business model.  Google has done the same with their Android platform.

See the Future

To me, the future of coding will embrace anything that will allow one set of (familiar) code to be compiled to multiple platforms.

  1. Until the next “new, new thing” comes along, JavaScript (in general) looks like the core language to know for now.
  2. Some interesting things appear to be coming from the ECMAScript 6 (ES6) standard.  Once a sufficient number of browsers support it, this new standard specifically should be another good skillset to have.
  3. Node.js has enjoyed an amazing degree of adoption throughout the world in its short lifespan.  Knowing how to code for it would be in your best interest.
  4. HTML5 has been used in a fair number of high-profile websites, enough to ensure its popularity for a few more years.
  5. GitHub hosts over 30 million individual repositories and is supported by many other systems, which can pull code from it automatically.  It looks like GitHub will be around for a while.
  6. Several popular languages will likely be effectively dead soon for a variety of reasons:  Java, Objective-C, Visual Basic, C, C++, .NET and Swift, to name a few.

Be the Future

If you want a job as a coder in the future it’s time to start actively steering in the right direction instead of just passively continuing to use the platforms you’re now on.  If you don’t have the skills I’ve listed above then consider taking on a project to learn one or more.

If you’re currently embedded on a team that uses Java, for example, then I’d suggest it’s going to become increasingly hard to find work elsewhere.  Given that coding work is already harder to find with all the competition, it’s more critical than ever to possess the skills that hiring managers are looking for.

Adobe PhoneGap


In 2011, Adobe purchased Nitobi, the company behind the PhoneGap mobile application development framework.  The open source core has since been donated to Apache as “Apache Cordova,” although many of us still refer to it as PhoneGap.  If you’ve ever attempted to create native iOS, Android or Windows Phone applications then you’ll enjoy this multi-platform approach.

Before PhoneGap, you’d have to install a development kit for each platform you wanted to target.  And then you’d need to learn each major player’s language:  Objective-C or Swift for iOS, Java for Android and XAML with C# for Windows Phone.  Good luck designing and maintaining three completely different codebases that aim to provide the same functionality.


But now with PhoneGap, you use what you probably already know:  HTML, JavaScript and CSS.  You then either 1) compress your collection of files into a ZIP file and upload it to Adobe’s website or 2) manage your changes in a GitHub repository and tell Adobe its location.
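For the repository route (option 2), a minimal sketch might look like the following; the folder name, placeholder files, identity settings and remote URL are all illustrative:

```shell
# Put a PhoneGap project under version control so PhoneGap Build can pull it.
# The folder name and placeholder files here are made up for illustration.
mkdir -p myapp/www
cd myapp
echo '<!doctype html><title>My App</title>' > www/index.html
touch config.xml
git init -q
git config user.email "you@example.com"   # local identity so the commit works
git config user.name "You"
git add config.xml www
git commit -q -m "Initial app code"
# Then publish it, for example:
#   git remote add origin https://github.com/you/myapp.git
#   git push -u origin master
# and give PhoneGap Build that repository URL.
```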

I’ll add that jQuery Mobile does a great job of streamlining some of the work you can do with PhoneGap.  It includes both methods for interacting with the browser’s DOM and a nice collection of CSS for displaying and lining up the widgets your app will need, for example, phone-styled push buttons sized correctly for fingers.
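As a taste of what that looks like, here is a bare-bones jQuery Mobile page sketch; the CDN versions and the button label are illustrative:

```html
<!-- A minimal jQuery Mobile page; versions and labels are placeholders. -->
<!DOCTYPE html>
<html>
<head>
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <link rel="stylesheet" href="https://code.jquery.com/mobile/1.4.5/jquery.mobile-1.4.5.min.css">
  <script src="https://code.jquery.com/jquery-1.11.3.min.js"></script>
  <script src="https://code.jquery.com/mobile/1.4.5/jquery.mobile-1.4.5.min.js"></script>
</head>
<body>
  <div data-role="page">
    <div data-role="header"><h1>My App</h1></div>
    <div role="main" class="ui-content">
      <!-- jQuery Mobile styles this as a finger-sized, phone-styled button -->
      <a href="#" class="ui-btn">Tap me</a>
    </div>
  </div>
</body>
</html>
```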


An initial set of app code is created from a command-line interface, producing a collection of files you’ll need for your app.  You’ll usually focus on two files within this collection:  config.xml and www/index.html.  The first configures the name, version and permissions that your app will need and the second defines the interface.  Use any editor you’re comfortable with.  And if you’re familiar with GitHub source code management, that will be useful later when you build your app.
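The project skeleton is typically generated with something like "phonegap create hello com.example.hello Hello", and a stripped-down config.xml might look like the sketch below; the id, version and author values are placeholders:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Minimal sketch of a generated config.xml; all values are placeholders. -->
<widget id="com.example.hello" version="1.0.0"
        xmlns="http://www.w3.org/ns/widgets">
  <name>Hello</name>
  <description>A sample PhoneGap application.</description>
  <author email="you@example.com">Your Name</author>
  <!-- The app's entry point, relative to the www folder -->
  <content src="index.html" />
  <access origin="*" />
</widget>
```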

You usually develop with the help of the PhoneGap Desktop App plus a phone-specific version of the PhoneGap Developer App.  The Desktop App reads your code and serves up a development version of your application; the specific Developer App for your phone will allow you to test your code.  As you make changes in your code the Desktop App will send the new version to your phone so that you can instantly see the change.

Up to this point, none of this requires any signing keys from Apple or Microsoft, nor, in the case of an Android app, your own self-signed certificate.  Since PhoneGap’s Developer App is itself signed, you’re good to go.


The default set of files from PhoneGap comes pre-equipped with the Jasmine test suite built in.  Edit the www/spec/index.js file to modify the default tests, verify that the PhoneGap Desktop App is running and then execute them by bringing up the /spec folder for your application within the PhoneGap Developer App.
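For example, a hypothetical replacement for www/spec/index.js might look like the sketch below.  The helper function and its expected title are made up, and the small stand-ins at the top exist only so the sketch can also run outside the Jasmine runner that the PhoneGap apps provide:

```javascript
// Hypothetical spec in the style of www/spec/index.js; names are illustrative.
// Stand-ins for the Jasmine globals so this sketch runs standalone; inside
// the PhoneGap Developer App's spec runner these already exist.
if (typeof describe === 'undefined') {
  global.describe = (name, fn) => fn();
  global.it = (name, fn) => fn();
  global.expect = (actual) => ({
    toBe(expected) {
      if (actual !== expected) {
        throw new Error(`Expected ${actual} to be ${expected}`);
      }
    }
  });
}

// A trivial function under test (hypothetical)
function appTitle() {
  return 'My PhoneGap App';
}

describe('home page', function () {
  it('exposes the expected title', function () {
    expect(appTitle()).toBe('My PhoneGap App');
  });
});
```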


When you’re ready to start seeing your application as a stand-alone app you can then build it on the PhoneGap Build website.

You have two choices for pushing your code to PhoneGap Build:  1) compress your files into a ZIP file and upload it or 2) use a repository for your project and tell Adobe that location.

Since phone apps need to be digitally signed to identify the developer, it’s necessary to upload one or more keys to PhoneGap Build for this purpose.  An iOS app will need an Apple Developer key, a Windows Phone app will need a Microsoft Developer key and, finally, an Android app uses a self-signed certificate that you can create yourself without paying a fee (as you must for the first two platforms).  The PhoneGap Build website provides enough guidance in this area.

Once built, the PhoneGap Build website provides one or more individual binary downloads on a per-platform basis as well as a QR Code scan image that you can use to direct your phone to fetch the appropriate one.

what the fork?

fork noun repository

A copy of a repository’s master code, allowing one to freely experiment with changes without affecting the original.  The term also covers a complete departure, made in order to do something differently from the original author’s design, sometimes without any intent to merge code back into the original master branch.

The question for project managers would be “Is there a natural risk for open source software projects with respect to forking?”

Open source is wonderfully organic in that it grows like a plant.  Branches are created overnight, branches may or may not be maintained by others and some simply wither away.  Like plants, some software branches are buggier than others.  We acknowledge, though, that our own programs are built from a collection of code from these repositories.  At the moment of truth, when we decide to include some particular bit of code, we make a reasonable assumption:

Over time, I assume that this included code won’t change so dramatically that it no longer serves my own purpose or that it will break my own code.

Can we honestly take on that risk for our own venture, given the usual number of dependencies in a typical Node.js project?  Most open source code doesn’t usually devolve so destructively, since it hasn’t yet reached any level of fame.  But when some code enjoys thousands of downloads, that necessarily means many people are suddenly very interested in its future development.  And if this public forum is anything like the inter-department conversations of an average software development company, then you may assume that different parties will have differing ideas about what that future should look like.  And this, then, is the risk we take on:  in the future, someone else may steer that dependent code in a direction incompatible with our own.

Size Matters

You’d think that each person in the open source arena would have an equal say in future suggestions and modifications to commonly-used code.  To be honest, I don’t believe that’s the case.  Big players like Google actively participate in the world of open source, and given their size and the number of developers they can throw at anyone else’s code, they could easily steer development efforts in a direction which suits their own needs.  Ideally, open source would operate on merit alone.  The reality could easily be that it is very much at the mercy of anyone with the wherewithal to dominate the playing field.

Adobe’s PhoneGap builds on Cordova behind the scenes for multi-platform phone development.  Given Adobe’s level of commitment, competing technologies may well not succeed over time.  If you choose the wrong technology you could abruptly lose support when some dependent code author decides to hang it up and do something else.  Big players like Adobe can enter a space almost overnight and quickly change the playing field for everyone else.


The Fork in the Road

Personally, I use jQuery and jQuery Mobile in my own software.  What would happen if somebody at the jQuery Foundation decided to change the fundamental ways that the interface works with PhoneGap, the development platform I’m also using?  I’d then find myself a helpless passenger in a carriage I’m not piloting, suddenly careening down some path I didn’t envision.  Not only would my code break for a single build, but I might then need to re-evaluate bigger decisions like platform and dependency selections.  I might even need to consider throwing labor at jQuery development.  And if I choose to depart from the direction they want to go, then I necessarily lose support over time as a consequence.

It just strikes me, as a long-time software developer, that we could use some risk management strategies for the dependencies we take on in our projects.  Is it then necessary to read outstanding issues on a per-dependency basis and to haunt those discussions?  I hope not.  I don’t think the average coder has that luxury of time on their schedule.

open source

open-source adjective COMPUTING

denoting software for which the original source code is made freely available and may be redistributed and modified.

This is a big change for me, having made a career as a software developer over half my adult life, usually getting paid for doing it.  In the U.S., here’s the timeline for software development:

  1. 1980-1985:  In its infancy, the earliest software developers made software for the novelty of it.  Nobody fully understood the value of computers and this software, so a market didn’t really exist yet.  There wasn’t a lot of money to be made; it was a hobby and we knew it.  This early period ended in the mid-80s when businesses started embracing these solutions and were willing to pay for them.
  2. 1986-1999:  In its heyday, software developers flourished.  The advent of the Internet and website solutions largely fueled this feeding frenzy.  Programmer salaries climbed to six figures for someone good.
  3. 2000-2013:  Abruptly in 2000 at the end of Bill Clinton’s last term, a huge number of software developer jobs were suddenly outsourced to foreign countries, chiefly:  India, China and Russia.  The U.S. still hasn’t fully recovered from the job losses sustained but businesses have learned that not all outsource promises actually pay off.
  4. 2014-current:  The current trend appears to be a movement away from the (licensed) Microsoft and Mac OS operating systems to any of a number of UNIX-based systems and especially browser-only software and phone applications.  Further, this trend continues with this “no fees” mentality by abandoning earlier licensing models completely.

From a personal-reward standpoint, the open source initiative has returned full circle to our earliest days.  You make software but you don’t do it thinking about the money.  Perhaps you hope that things will just work out and some money will land in your lap somehow.  Maybe this project will lead to a paid gig somewhere, who knows?  I think most young programmers are doing it to beef up their résumé and little more.

I think I would caution U.S. programmers, though.  It’s good to make free software and to make it available to other good people who do the same.  Please know, however, that there are corporations right now who are using open source to avoid paying software developers for their livelihood.  Corporations are using someone’s free labor as a means of saving costs.  I don’t think this is what the open source founders had in mind.

Corporations are using someone’s free labor as a means of saving costs.

Is this fair?  Having been a programmer for over three decades it doesn’t feel like it’s a fair playing field right now.

In order to compete in today’s market it seems like you need to program in the world of open source.  Given the current nature of open source this means that there are others who aren’t similarly contributing and yet who still enjoy the fruit of your free labor.  They’re making money and you’re not.  That’s not Capitalism, that’s essentially Feudalism.

Maintaining a computer, a good Internet connection and current software costs money.  Rent is expensive, especially in the tech-savvy areas of our country.  The economy has already been devaluing U.S. labor across the board for a little over ten years now.  In light of all this, I have to ask the rhetorical question:

Why are we now giving away our high-tech labor for free?

It strikes me as a bad strategy.  Corporations outsourced a decade ago and then, presumably, learned the lesson that outsourcing doesn’t produce quality code.  And now that we have this golden opportunity back, young U.S. programmers have decided to enter the market without being paid for their labor, further devaluing software work for everyone.

We can rightly blame corporations for being too greedy over the last decade.  But we must then blame ourselves if, as an industry, we decide to work without being paid.  Are we so afraid of the competition of outsourced foreign labor “on the cheap” that we have to follow suit and do the same?

But we must then blame ourselves if, as an industry, we decide to work without being paid.

If you want to contribute to the open source initiative (as I am) then I’d strongly suggest that you don’t give everything away for free.  Seriously, guys, it’s time U.S. programmers found a better compromise with the consumers of software so that we can better afford to live here.